Mark Koester

Personal Blog. A Data-Driven Life. Minding the Interstitial Spaces since 2007.

Can Meditation Improve Your Attention? Self-Experiment Into Mindfulness and Cognitive Testing

The reported benefits of meditation are quite impressive: better physical health and happiness, less stress and anxiety, and even improved mental health and cognitive functions.

Unfortunately, I’m a skeptical guy. I like data and research to back up such claims. Frankly, the advantages of meditation almost seem too good to be true. Is this a pseudoscience? Can meditation really improve our bodies and our minds? Which types are effective and how?

I recently bought the Muse Brain Sensing Headband in an attempt to ponder some of these questions about the mind and meditation. Muse is a wearable that tracks your brainwaves using EEG and provides real-time neurofeedback while you meditate. It’s relatively popular in the quantified self and biohacker space. Built using dry electrodes and seven sensors, it’s essentially a stripped down, four-channel EEG monitor (compared to standard ten-channel). The aim of the device is to quantify your mental states and subtly train you to meditate better through auditory cues (chirping birds and nature sounds).

In order to explore this claim about the cognitive benefits of meditation, I decided to do a self-experiment, also known as an n=1 or n-of-1 trial. For a few weeks, I took daily cognitive tests both before and after I either meditated with Muse for 10 minutes or did some other activity for roughly the same amount of time. Using neuropsychological tests of the kind used to measure cognitive impairment, I hoped the results would tell me whether meditating provides any cognitive benefit in areas like attention, information processing, and reaction time.

An n-of-1 trial is an experiment done on a single person using a series of interventions over a period of time. Basically it’s a modification of the classic crossover design wherein the intervention and placebo are tested in alternating patterns. The interventions themselves are often blinded (meaning you don’t know which medication you are taking at any one time), and they are either completely randomized or given in a balanced crossover schedule (like ABBABAAB). Outcome measurements are taken throughout to record the effects. These could be biomarkers, tests, surveys or something else. After a few rounds, statistical analysis is used on the outcome measurements to see if there was an effect and to determine which intervention was best. Additionally, you typically note any adverse effects.
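To make the scheduling idea concrete, here is a minimal sketch of how a balanced crossover schedule could be generated. The helper name and block string are my own illustration, not from any trial toolkit:

```python
def balanced_crossover(block="ABBABAAB", days=14):
    """Repeat a balanced crossover block (equal counts of A and B)
    until it covers the experiment length, then truncate."""
    assert block.count("A") == block.count("B"), "block must be balanced"
    reps = -(-days // len(block))  # ceiling division
    return (block * reps)[:days]

# A 14-day schedule built from the ABBABAAB block:
print(balanced_crossover())  # ABBABAABABBABA
```

Note that even after truncation to 14 days, this particular block yields seven A days and seven B days, so the intervention and control stay balanced.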

In my own case, the outcome I wanted to measure was an improvement in my cognition. The intervention or experimental variable I was testing was meditation, compared to other activities, which served as the placebo. It was impossible to blind the intervention, since you obviously know whether you are meditating or not, so you might call it an open-label experiment.

The tests I used were taken from standard neuropsychological assessments, frequently used to diagnose cognitive impairment and in clinical trials on different drugs and supplements. I took the tests in the same place and around the same time daily, shortly after breakfast and roughly one hour after waking up. After several weeks, I then did some statistical and data analysis to check and visualize the effects and to see if they were significant.


So, what were the results?

Meditation improved my attention and other cognitive functions! Unfortunately, so did every other activity I did. In fact, the main takeaway from the final results was that meditation was not the most significant variable in my cognitive improvements. Instead, the experiment revealed that simply re-taking the tests led to both an intraday improvement (meaning an improvement between the before and after sessions) and a cumulative, linear improvement (meaning my first 5–10 scores were lower than my last 5–10 scores). In short, what I found was that my cognitive testing scores improved mostly through a combination of practice and training effects. I got better not because I meditated, but because I tested again and again.

In the rest of this post, I want to look at this experiment and the results in a bit more detail and explain what happened and why. Hopefully by the end of the post you’ll understand what cognitive tests are, a bit more about meditation, and how to do your own self-experiment on the meditation effect!


A “TRY THIS AT HOME” WARNING NOTE: My account and approach to meditation described here should largely be safe for most anyone to try. That said, meditation can be an intense experience with a history of adverse effects (Love, 2018; Kornfield, 1979; Lindahl, 2017). Most meditators, especially 10-minutes-a-day types, should be fine.

“Citizen Science” Code and Data Analysis: Please do tell me how your meditation effect experiment goes. For the data analysis part, the code is available for free with QS Ledger, an open source tool for Quantified Self Personal Data Aggregation and Data Analysis. If you want a quick data analysis, send me your testing results by email, and I’d be happy to generate a PDF report for you!


My N=1, Self-Experiment Protocol: How to Test the Effect of Meditation on Attention

The short version of what I did: Over the span of 14 days I took daily cognitive tests using Quantified-Mind.com both before and after I either meditated or did some other activity for roughly 10 minutes. I then did some data analysis to examine the effects.

Below is a description in greater detail. (Apologies for the pseudo-academic tone. Been reading a bit too much scientific literature lately.)

Methods and Trial design:

For this experiment, I used an online testing platform called Quantified Mind. The tests were Go / No-Go and Cued Attention, which focus mostly on visual attention and information processing as well as a degree of executive function, in the form of inhibition control. I followed Quantified Mind’s meditation experiment protocol, which recommended that on alternating days I do the tests both before and after either meditating or an alternative activity. I took the tests each morning over the span of roughly two weeks.

SIDEBAR: What is meditation?

In What am I meditating for? In Pursuit of A Definition of Meditation, I attempted to tackle the question of what meditation is. While a lot of features and factors can be used to define meditation and its different types, the best definition I’ve been able to come up with is that meditation is:

A multi-step process whose two principal components are 1. the methods or cognitive strategies used and 2. the enhanced mental states it brings about.

Overall, meditation involves several stages and rituals, but the end result is to bring you to a certain meditative state.

What I mean by this definition is that different meditation types use different cognitive strategies (like concentrating on the breath, counting breaths, open monitoring, etc.) in order to bring you towards a certain enhanced mental state. So, if you are trying to classify meditation types, among other things, it is best to specify the methods used and the target meditative or mental state achieved.

Intervention: 10 Minutes of Meditation vs. Another Activity

The main variable I was trying to test for was the effect of meditation. For my meditation practice, I used Muse, an EEG wearable monitor and neurofeedback meditation app. After putting on the device and syncing it with my phone, I did 10 minutes of neurofeedback mindfulness meditation, which is a form of concentration meditation where I attended to my breath and received auditory feedback on how calm I was.

A full dive into Muse and neurofeedback-supported meditation is outside the scope of this post, but to be more precise, based on our definition of meditation, here are the key characteristics:

  • Method: Mindfulness-focused concentrative meditation involving attention to breathing, supported by EEG neurofeedback based on a calm mental state.
  • Enhanced Cognitive State: The intended mental state was a calm, relaxed state with attentive concentration on breathing.

For the active control, I did a few different activities, including reading and sorting computer files.

SIDEBAR: Neuropsychological and Cognitive Testing: Can we quantify cognitive functions with games and quizzes?

Screenshot showing a reaction time test from Quantified-Mind.com


All good experiments require an outcome measurement. This lets us know if a drug, intervention or some other change had an effect compared to the control. For our purposes, cognitive tests or neuropsychological assessments provide the real “litmus test” as to whether meditation has any effect on our attentional capabilities. Cognitive testing, which pre-dates much of today’s neuroimaging tools, remains a reliable method to study cognitive functions, like memory and executive control.

Used somewhat loosely and still ill-defined in much of the literature, cognitive functions describe the capacities of our brains and what they are able to accomplish. While language, social cognition and perceptual and motor functions are also often mentioned, the three most commonly cited cognitive functions are:

  1. Attention: Attention is a behavioral and cognitive process of selectively concentrating on certain information while ignoring other information. Attention, concentration or focus describes a cognitive function that enables goal-directed behavior amongst the plethora of sensory stimuli originating from our environment and minds. Attention also involves a state of arousal, which we might colloquially call our energy level. This means that our ability to pay attention depends on BOTH being awake and ready AND being able to selectively focus on certain inputs over others.
  2. Memory: Often viewed as a unitary function, memory actually involves multiple capacities and interconnected abilities. Broadly speaking, memory describes our ability to retain information. In a technical sense, memory as a cognitive function refers to the ability of our brains to encode, store, and retrieve information when needed. This includes, among other things, our short-term or working memory.
  3. Executive Functions: Often referred to as our “higher” cognitive functions, executive functions entail planning, reasoning and orchestrated and controlled behavior as well as related aspects like inhibition control, intelligence (especially fluid intelligence), and even creativity.

Neuropsychological and cognitive testing describe the various methods devised to quantify and measure our minds. Much of cognitive testing aims to assess impairment. This can include screening for cognitive disorders and brain diseases as well as brain trauma like a concussion. Increasingly, cognitive testing has also been used to study neuroenhancement and efforts to augment our memory, reaction time and more.

Several dozen, if not hundreds, of cognitive and neuropsychological tests exist to measure different aspects of the human mind. Here are a few common ones and what they test:

  • Reaction Time: These tests assess how quickly a person can respond to a single stimulus (like a circle that turns from white to green) by pressing a button. Several variations exist, like choice reaction time and Go/No-Go, that additionally require information processing and inhibition control. Reaction time tests are a relatively direct means of measuring cognitive functions like processing speed and various attentional deficits associated with aging, depression and neurological impairment (Lezak, 2004).
  • Stroop Task: This is a classic cognitive test focused on executive function. Named after the American psychologist John Ridley Stroop, the test asks participants to attend alternately to either the written name of a color (e.g. “red”) or the printed color of the word itself. When the color of the letters making up the name differs from the color the word describes, participants exhibit a delayed response. The Stroop task has been used to measure frontal lobe dysfunction and aging among other effects (Lezak, 2004).
  • Digit Span: This is another classic test used to measure memory and, to some extent, attention. During the test, digits are presented to the participant one by one, and then the participant is asked to repeat them back. There are a few variations, including forwards, backwards, auditory, and visual, as well as some involving repeating back a pattern of filled-in boxes on a grid. The normal range for Digits Forward is 6 ± 1, and the difference between digits forward and digits reversed tends to be a bit above 1.0 (Lezak, 2004).

While most cognitive tests strive to isolate and test individual cognitive functions, the reality is that these tests, like our minds themselves, rarely test a single function. In fact, most tests require a degree of several cognitive abilities to complete. For example, we need a degree of attention, executive control and memory to do most of what we do in life. Our adaptive minds are a tight coupling of several abilities and brain regions.

For our own purposes in this experiment, these tests provide the most reliable and objective way to measure any possible acute cognitive effects following meditation, meaning our scores should theoretically increase after meditation.

Outcome Measurements: Cognitive Tests on Quantified-Mind.com

In order to assess the impact of meditation on my cognition, I took daily cognitive tests before and after. After reviewing a few different tools, the testing platform I elected to use was Quantified-Mind.com. It’s free and provides a pretty wide range of standard and adapted neuropsychological tests, which can be used to test various cognitive abilities.

The specific tests used were:

  • Go/No-Go: In this test you are shown a black screen with either a white square at the top (target stimulus) or a white square at the bottom (a stimulus meant to be ignored). The stimuli are presented one by one, and your task is to tap the space key when the target appears and to ignore the non-target. Statistics are kept on how fast you react and on your accuracy (successful taps vs. taps on non-targets), and a final test score is generated. This attention test is similar to the T.O.V.A.
  • Cued Attention (dot): This test presents you with two boxes, one on the left and one on the right. When a dot appears inside one of the boxes, the task is to tap the space bar. Distractions and non-target stimuli are provided by flashing boxes, which you need to ignore. The object is to tap only when the dot appears in a box. Both accuracy and reaction time are measured, and an aggregate score is calculated at the end. This is another attention test, quite similar to Go/No-Go, and is based on the popular Posner spatial cueing task.

Both of these tests are primarily intended to quantify your attention and reaction time. They also test to an extent your executive function in the form of inhibitory control and speed of information processing.

While reaction time and accuracy were measured throughout each test, the only final information provided was a composite or aggregated score for each test. So the final outcome measurements used were the test scores of Go / No-Go and Cued Attention.
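Quantified Mind does not publish its exact scoring formula, but a composite of accuracy and speed might look roughly like the sketch below. The function and its weighting are purely a hypothetical illustration of how accuracy and reaction time could be folded into one number:

```python
def composite_score(hits, false_alarms, targets, nontargets, mean_rt_ms):
    """Hypothetical aggregate score: proportion of correct responses
    (hits plus correct rejections) scaled by response speed.
    NOT Quantified Mind's actual formula, which is unpublished."""
    accuracy = (hits + (nontargets - false_alarms)) / (targets + nontargets)
    speed = 1000.0 / mean_rt_ms  # responses per second
    return round(1000 * accuracy * speed, 1)

# e.g. 18 of 20 targets hit, 1 false alarm, 400 ms average reaction time
composite_score(18, 1, 20, 20, 400)  # → 2312.5
```

The key point is that any such composite rewards both speed and inhibition control, which is why a single aggregate score can stand in for attention and reaction time together.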

Sampling, Sequence and Schedule

Following the protocol from Quantified-Mind.com, I took the tests 28 times over two weeks. I took them in the same space and at roughly the same time each morning, following breakfast and a coffee. I followed an alternating daily schedule of meditation on one day (A) and an alternative activity on the other (B). So the pattern was AB repeated. I took the initial tests, then did an activity and retook the tests. This resulted in a 14-day sample of 28 test results.

Data Analysis:

Unfortunately, at the time of writing, Quantified Mind does not provide actual analysis of the experiment’s test results. It only gives a few generic charts on scoring trend on individual tests over time.

So for that reason, data processing, exploratory statistical analysis and data visualization were all done using Python. I have made public the code I created as part of QS Ledger here. Besides a visual comparison, I used (frequentist) statistical methods to look at effect size based on differences between means using a simple comparison and the more robust Cohen’s d.
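The two statistical comparisons can be sketched with standard-library Python. This is a simplified version of the kind of comparison described above, not the QS Ledger notebook code itself; the function names are my own:

```python
import statistics

def mean_difference(before, after):
    """Simple comparison: difference between the session means."""
    return statistics.mean(after) - statistics.mean(before)

def cohens_d(before, after):
    """Cohen's d: the mean difference divided by the pooled
    (sample) standard deviation of the two groups."""
    n1, n2 = len(before), len(after)
    v1, v2 = statistics.variance(before), statistics.variance(after)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return mean_difference(before, after) / pooled_sd
```

With scores on the same scale, the mean difference reads in test-score units while Cohen’s d is unitless, which is why d is the more robust way to compare effects across different tests.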

Some additional factors (sleep, HRV, exercise, etc.) and analysis of them will be provided in a later followup.

Results: Did Meditation Improve My Attention?

Results of Exploratory Data Analysis

FIGURE 1: Box-and-Whisker Plot showing an aggregate summary of scores in each test scenario. The most visually obvious improvement is in 1. Before and After Activity (Go/No-Go) and 2. Before/After Meditation (Cued Attention). Is this evidence of cognitive improvement?


The results of the experiment did not provide sufficient evidence to support the claim that meditation improves attention and related cognitive functions. Instead, the strongest effect appears to be a practice effect (aka “test-retest”).

As summarized in Table 1, the highest scores were for the most part recorded after either meditating or an activity. As shown in Table 2 and explained below, statistical analysis using Cohen’s d did indicate a sizable before/after effect, with the highest effect size being 1.08 for before/after meditation on the Cued Attention test.



On the surface (as shown in FIGURE 1 and TABLE 2) this does seem to point towards cognitive improvements, especially the improvement in Cued Attention following meditation. But when looking at FIGURE 2, which shows the overall trend and before/after effects, the apparent effect of meditation is more likely attributable to the practice or test-retest effect than to one of the test variables.

In fact, the main results of this experiment provide more evidence for one’s ability to improve cognitive testing scores through practice than for improvements caused by an activity like meditation. If we compare the initial five or six days of testing with the last five or six days, we clearly see signs of overall improvement on both tests, regardless of any meditation effect.


FIGURE 2: Scoring trends showing linear improvements in the results of both before (blue) and after (orange) sessions. This is most indicative of practice effects, both over the linear course of the experiment and in intraday testing, i.e. before and after.
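One quick way to quantify a cumulative trend like the one in FIGURE 2 is to fit a least-squares slope of score against session number. This is a minimal standard-library sketch of the idea, not the actual notebook code:

```python
import statistics

def practice_slope(scores):
    """Ordinary least-squares slope of score vs. session index.
    A clearly positive slope is consistent with a practice effect."""
    n = len(scores)
    x_mean = (n - 1) / 2                 # mean of indices 0..n-1
    y_mean = statistics.mean(scores)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(scores))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# scores rising roughly 2 points per session
practice_slope([10, 12, 14, 16])  # → 2.0
```

If the slope is positive for both the before and after series, as in FIGURE 2, improvement is happening regardless of the intervention.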


Results of Statistical Analysis

Two statistical analysis methods were used.

First, a simple comparison of means was done on the results of the four scenarios. Results indicated improvements between before and after for both meditation and activity. Here are the top five mean differences:

  1. Before/After Meditation: Cued Attention: 41.17
  2. Before Activity vs. Meditation: Cued Attention: 33.6
  3. Before/After Activity: Go / No-Go: 31.58
  4. Before/After Meditation: Go / No-Go: 19.6
  5. After Activity and Meditation: Go / No-Go: 15.6

Second, Cohen’s d effect size was used to look at effect size and explore any potential significance. The results are provided in TABLE 2.

As a quick reminder, according to Cohen’s power primer (1992), d = 0.20 is considered a small effect size, d = 0.50 is a medium effect size, and d = 0.80 is a large effect size. Anything below 0.2 is generally viewed as no effect.

As shown in TABLE 2, the results showed a sizable Cohen’s effect size for the before/after comparison for both meditation and activity. The highest specific effects were on Cued Attention in Before/After Meditation (1.08) and Go / No-Go in Before/After Activity (0.50).

Once again this does appear to indicate a strong effect, and it would be easy to interpret the results as an endorsement of meditation as a key to cognitive improvement. But once again what we are mostly seeing are the effects of re-testing.

Final Results: Practice Effect, rather than Meditation, Improves Cognition

As shown in TABLE 2, all testing results improved during re-testing. In aggregate, for Before/After, Go / No-Go showed a Cohen’s effect size of 0.47 and Cued Attention one of 0.68. This would indicate a medium effect overall. Due to the small sample size (28), further testing is likely needed to confirm the results in greater detail and with higher confidence.

While in this experiment I was hoping to show that meditation improves cognition, the evidence does not seem to support that. In fact, the strongest effect appears to come simply from re-taking a test shortly after just taking it. Moreover, repeatedly taking the tests over time showed cumulative, linear improvements (FIGURE 2). In short, this experiment demonstrated the practice effect.

The practice effect is a well-known issue in neuropsychological and cognitive testing (Collie, 2003; Wesnes, 2002). It describes the phenomenon where testing results improve simply from retaking a test, regardless of any other changes or interventions. This can lead to the false impression that an intervention caused an improvement that was actually gained through retesting. A few reasons have been proposed for why this happens, including greater familiarity with the procedure, learning, and even a placebo effect.

In traditional experiments, there are a few ways to overcome the practice effect and linear trends in general. One method is to use different test batteries, which mitigates the practice effects that come from repeatedly taking the same test. So at the start of the experiment you might take testing battery A, then during the experiment take testing battery B. At the end you would either repeat A or even do C.

Arguably the best method to deal with the practice effect is through experimental design, using either parallel groups or a crossover design. Parallel groups allow you to compare the size of the improvements between groups, and, as such, in spite of the practice effects, they let you see if improvements are greater for the test intervention compared to a control. One problem here can be a placebo effect if you do not provide an active control, meaning a control that also appears to benefit cognition. For example, in this experiment you might use a relaxation technique as an active control to meditation.

In contrast, a crossover design is the best method in our particular case. Crossover experimental design involves periodically switching between the intervention and the control, with the same measurements taken throughout (Bose, 2009; Lui, 2016). This means you receive both the control and the intervention and have measurements from both situations. Subsequently, during analysis, you compare these different blocks. There are some issues with crossover designs, especially with drug testing, due to carryover effects that often require washout periods. In most cases, though, a crossover design provides a reliable solution for dealing with practice effects among other experimental challenges.
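The block comparison at the heart of a crossover analysis can be sketched in a few lines. This is a toy illustration of the idea, not the full analysis; the function name is my own:

```python
from statistics import mean

def crossover_contrast(schedule, outcomes):
    """Split outcome measurements by condition label and return
    mean(A) - mean(B). `schedule` is a string like "ABBABAAB";
    `outcomes` holds the paired measurement for each period."""
    a = [y for label, y in zip(schedule, outcomes) if label == "A"]
    b = [y for label, y in zip(schedule, outcomes) if label == "B"]
    return mean(a) - mean(b)

crossover_contrast("ABAB", [10, 8, 12, 9])  # → 2.5
```

Because every participant (here, just me) contributes measurements to both conditions, shared trends like the practice effect fall on both sides of the contrast and largely cancel out.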

There are a few ways to do this but a commonly recommended cross-over schedule looks something like ABBABAAB. This type of design provides a way to capture linear trends and enough of a balance that both control and intervention get tested equally and fairly.

The protocol provided by Quantified Mind could be described as a “naive” crossover design, since it simply alternated between A (meditation) and B (alternative activity). This makes sense from an ease-of-implementation standpoint. While not the most optimal, it worked reasonably well for this experiment and provided enough data to show the practice effect. In retrospect, a more robust balanced AB/BA design, such as ABBABAAB repeated to cover the 14 days, would be my recommendation.

Discussion of Final Results and Experiment

Overall, I believe the primary conclusion of this experiment should be that the results pointed to a limited or null effect of meditation, at least compared to either an alternative activity or simple re-testing (practice) effects. As such, the evidence does not point towards meditation as an acute means of improving subsequent cognitive test scores.

Due to the limited number of samples (28) and short period (14 days), these conclusions should not be viewed as definitive proof on any of the core questions we hoped to answer, most notably: Does Meditation Improve Your Attention?

I believe these initial results offer a good starting point for both more research and future experiments that include more testing samples, different meditation types, and a more robust crossover design. Ideally, along with more samples, a wider range of activities and meditations can offer clues into the acute effects of pre-testing situations and activities. Our question then becomes: What activities best cue or prime us towards higher performance as measured by cognitive tests?

Conclusion: Meditating for Mental States or Neuroenhancement?

Brain studies and how to track my mind have been two of my learning goals this year. I bought the Muse EEG brainwave monitor and neurofeedback meditation tool as a way to explore these interests, and I plan to dig deeper into this topic more this year.

Meditation is an interesting test case for neuroenhancement since it is non-pharmaceutical and there appears to be a long history of humans believing there are benefits to meditating. I decided meditation deserved further study, especially if there was a way to quantify it using a device like Muse. This experiment on cognitive testing and meditation provided a good starting point to test a few questions and to explore a number of interrelated topics related to the mind.

Going into the experiment, and based on my previous meditation experiences, my “naive” working hypothesis was that meditation would improve my test scores on attention. It seemed obvious that meditation, as a method to bring you more in tune with your mind, was a way to level up the brain, and that this should be recognizable through neuropsychological testing, especially tests of attention and reaction time. I didn’t expect huge effects, but I expected some. I even initially typed an introduction to this post under that assumption.

Unfortunately, so far, I was not able to show significant cognitive benefits from meditation alone during my n=1 crossover experiment comparing meditation and an alternative activity via cognitive testing results. Instead, what I found was that cognitive testing as such (meaning repeat testing) done over time was the largest factor in my testing improvements. Taking the tests repeatedly and over time led to the biggest improvement, not meditation.

On the one hand, we might interpret the results of this experiment into the cognitive effects of meditation as rather disappointing. Initially I was disappointed too. After all of this time meditating, it seems it wasn’t bringing any acute, transferable benefits to my cognitive performance. But maybe we shouldn’t be so quick to discount the benefits of meditation. For one thing, this experiment tested neither long-term effects nor the question of dosage. I can’t say if the linear improvements were due completely to re-testing and practice or if a small part came from my meditation. I suspect meditation didn’t change anything, since changing our brains likely requires more than an hour or two of meditative practice.

On the other hand, my cognitive testing improvements are a positive and interesting result, since they show we can improve our minds in a way. Whether this is the practice effect or a training effect is impossible to determine in this particular experiment. It might indicate the possibility that so-called “brain games” and cognitive training are a way to improve our minds. Unfortunately, the research is a bit mixed on the benefits. There is research showing training does improve cognitive testing results, but most also shows that the effects don’t transfer. What this means is that an improvement on certain cognitive tests doesn’t result in an overall improvement in other areas. In general I’m not planning to invest my time in brain training, but in view of some cases where the effects were transferable, especially in memory, it isn’t something we should completely count out.

Besides a personal verification of the practice and training effects, three observations stood out to me from this experiment.

First, we probably shouldn’t expect acute cognitive benefits from meditation. In fact, meditation may actually slow our cognition in some ways. While I expected the opposite, this result is not surprising when you consider that in most meditation practices you close your eyes and center your attention on yourself. This would mean a shifting of cognitive resources away from situational and sensory awareness towards a certain inner watchfulness. One could fathom that meditation hinders rather than enhances certain visual attention abilities. For example, I’d be interested to see how other forms of meditation, especially open-eyed and open monitoring, might offer different results on my attention testing.

Second, this might come as a surprise, but, in spite of the negative results on the effect of meditation on my cognition, I plan to continue meditating. For one thing, more testing and research are needed on the neuroenhancement potential of meditation as well as the other reported health benefits. But, to be honest, as I realized while writing my post on What is Meditation?, meditation isn’t just about the benefits. My definition of meditation as a multi-step process of methods and cognitive states intentionally leaves out anything to do with benefits. Instead, meditation has historically been and remains a process of reaching and exploring different mental states. For the purpose of exploring and understanding our minds, meditation is one of the most amazing phenomenological tools we have.

Meditation is one of the most robust human traditions ever assembled of humans monitoring their minds, literally minds “minding” minds. Interestingly, meditation in the Buddhist tradition can be understood to some extent as one’s personal striving to cultivate a mind capable of self- and universal understanding. Leaving aside the religious and belief claims, it’s not all that surprising how often the study of the mind and brain overlaps with various meditative practices. Neurologists and researchers have even used Buddhist meditators and texts as phenomenological guides into different mental states and consciousness (Depraz, 2003; Varela, 2017, orig. 1992). For that reason among others, I plan to continue meditating.

Third, when it comes to the question of neuroenhancement and meditation, dosage is a pretty substantial question. In fact, like the question of what meditation is, I find too much of the academic literature on meditation and its benefits ignores or fails to answer the pointed questions of how much and how long. Practically speaking, how much do we need to meditate to expect a health or cognitive benefit? Is a little bit daily enough, or do we need to invest in a more substantial practice? What type of meditation is most effective toward certain goals? Furthermore, can technology, specifically neurofeedback, entrain your brain to reach certain states faster and shorten the time needed to get to meditation-induced brain changes?

Frankly, my current experiment wasn’t designed to answer these questions. A deeper exploration is needed into EEG, brainwaves and neurofeedback in general as well as further testing.

While meditation might be one way, for now, I think we are best sticking with the more well-established routes to cognitive enhancement: exercise, good health, music playing, sleep, learning foreign languages and new skills, and possibly even nootropics.


Best of luck and happy tracking!


References:
  • Bose, M., & Dey, A. (2009). Optimal Crossover Designs. World Scientific.
  • Cohen, J. (1992). A power primer. Psychological bulletin.
  • Collie, A., Maruff, P., & Darby, D. G. (2003). The effects of practice on the cognitive test performance of neurologically normal individuals assessed at brief test–retest intervals. Journal of Clinical and Experimental Neuropsychology.
  • Depraz, N., Varela, F. J., & Vermersch, P. (2003). On Becoming Aware. John Benjamins Publishing.
  • Hodges, J. R. (2017). Cognitive Assessment for Clinicians. Oxford University Press.
  • Kornfield, J. (1979). Intensive insight meditation: A phenomenological study. The Journal of Transpersonal Psychology, 11(1), 41.
  • Lezak, M. D., Howieson, D. B., Loring, D. W., Fischer, J. S., & others. (2004). Neuropsychological Assessment. Oxford University Press, USA.
  • Lindahl, J. R., Fisher, N. E., Cooper, D. J., Rosen, R. K., & Britton, W. B. (2017). The varieties of contemplative experience: A mixed-methods study of meditation-related challenges in Western Buddhists. PloS one, 12(5).
  • Lui, K.-J. (2016). Crossover Designs. John Wiley & Sons.
  • Love, Shayla (2018). Meditation Is a Powerful Mental Tool—and For Some People It Goes Terribly Wrong. Vice.com. Available at https://www.vice.com/en_us/article/vbaedd/meditation-is-a-powerful-mental-tool-and-for-some-it-goes-terribly-wrong.
  • More, L., Lauterborn, J. C., Papaleo, F., & Brambilla, R. (2019). Enhancing cognition through pharmacological and environmental interventions: Examples from preclinical models of neurodevelopmental disorders. Neuroscience & Biobehavioral Reviews.
  • Nash, J. D., & Newberg, A. (2013). Toward a unifying taxonomy and definition for meditation. Frontiers in psychology, 4, 806.
  • Ospina, M. B. (2008). Meditation Practices for Health. DIANE Publishing.
  • Posner, M. I. (1980). Orienting of attention. Quarterly journal of experimental psychology, 32(1), 3-25.
  • Varela, F. J., Thompson, E., & Rosch, E. (2017, orig 1992). The Embodied Mind. MIT Press.
  • Van Dam, N. T., van Vugt, M. K., Vago, D. R., Schmalzl, L., Saron, C. D., Olendzki, A. et al. (2018). Mind the hype: A critical evaluation and prescriptive agenda for research on mindfulness and meditation. Perspectives on Psychological Science, 13(1), 36-61.
  • Wesnes, K., & Pincock, C. (2002). Practice effects on cognitive tasks: a major problem? The Lancet Neurology, 1(8), 473.

Appendix of Additional Figures

Figure 3: Raw Scoring Trend Throughout Experiment


Meditation Metrics According to Muse
