Photo: A study participant wearing an electrode cap in his living room, with three researchers.

Our Philosophy & Research Goals

CAMBI is based on the principle that accessible multimodal brain-body interfaces must be developed and evaluated by an integrated interdisciplinary team of professionals and end-users.


Our goals are big, and the needs of end-users are even bigger.

Our research goals are ambitious. We plan to customize accessible multimodal brain-body interfaces for each individual user, placing brain-computer interfaces (BCIs) within an assistive technology framework. Our primary objectives are to increase participation and communication for individuals with complex communication needs and severe motor impairments, especially those who experience locked-in syndrome.

CAMBI strives to realize:

  • Participatory Action Research – involving current and potential end users, care providers, and family members in all stages of development and evaluation

  • User-Centered Design – understanding end users' needs and iteratively designing and evaluating systems that incorporate end users' contexts and feedback

  • A strong focus on outcomes for people with disabilities rather than on technology alone

No single profession or discipline can have a broad enough perspective or experience to address all the challenges that come with bringing BCIs and multimodal alternative access methods into the community for functional use. Our work is based on five integrated teams with expertise in: (1) signal acquisition and processing; (2) natural language processing; (3) cognitive neurophysiology; (4) clinical rehabilitation; and (5) software engineering and technology implementation. Finally, we partner with individuals with disabilities and their families, who share their expertise in living with communication and physical impairments and provide input on the needs of end users.

Current Projects

Optimizing BCI-FIT: Brain-Computer Interface Functional Implementation Toolkit
Many of the estimated four million adults in the U.S. with severe speech and physical impairments (SSPI) resulting from neurodevelopmental or neurodegenerative diseases cannot rely on current assistive technologies (AT) for communication. During a single day, or as their disease progresses, they may transition from one access technology to another due to fatigue, medications, changing physical status, or progressive motor dysfunction. No current clinical or AT solutions adapt to these individuals' multiple, dynamic access needs, leaving many people poorly served. This competitive renewal, BCI-FIT (Brain-Computer Interface Functional Implementation Toolkit), builds on the innovative multidisciplinary translational research we have conducted over the past 11 years to advance the science of non-invasive BCIs for communication in these clinical populations. BCI-FIT relies on active inference and transfer learning to customize a fully adaptive intent-estimation classifier to each user's multimodal signals in real time. The BCI-FIT acronym works on several levels: our BCI fits to each user's brain signals; to the environment, offering relevant personal language; to the user's internal states, adjusting to drowsiness, medications, and physical and cognitive abilities; and to users' learning patterns from BCI introduction to expert use.
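
The intent-estimation idea can be pictured with a minimal, hypothetical sketch (not BCI-FIT's actual code): evidence from each access modality is treated as a likelihood over candidate symbols and fused with a language-model prior in a simple Bayesian update. The symbol set, prior, and likelihood values below are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: fuse evidence from multiple access modalities
# (e.g., an EEG response score and a gaze/switch response score) with a
# language-model prior to estimate which symbol the user intends to select.
SYMBOLS = ["A", "B", "C", "D"]

def fuse_evidence(prior, likelihoods):
    """One Bayesian update: multiply the prior by each modality's likelihood
    for every candidate symbol, then renormalize to a probability distribution."""
    posterior = np.array(prior, dtype=float)
    for lik in likelihoods:
        posterior *= np.array(lik, dtype=float)
    return posterior / posterior.sum()

# Language-model prior over the next symbol (assumed values).
prior = [0.4, 0.3, 0.2, 0.1]

# Per-modality likelihoods for the current stimulus presentation (assumed values).
eeg_likelihood = [0.9, 0.05, 0.03, 0.02]
gaze_likelihood = [0.6, 0.2, 0.1, 0.1]

posterior = fuse_evidence(prior, [eeg_likelihood, gaze_likelihood])
for symbol, p in zip(SYMBOLS, posterior):
    print(f"P({symbol} | evidence) = {p:.3f}")
```

In a sketch like this, adding or dropping a modality simply changes which likelihood terms enter the update, which is one way to picture a system that adapts as a user's access methods change.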

Ethical considerations for language modeling within brain-computer interfaces
Machine learning (ML) and natural language processing (NLP) have the potential to transform communication for patients with neurodegenerative disease through personalized, real-time augmentative and alternative communication (AAC) devices. Individuals with severe communication impairments who can no longer carry on daily conversations or participate in previous life roles using speech often rely on AAC devices. ML and NLP are emerging as promising tools to bridge current technology and next-generation devices for individuals with the most severe speech and physical impairments. NLP efforts to combine large public data sets with private data sets, such as personal email messages, promise to give individuals with communication impairments their own personalized language models, models robust enough to bring them closer to real-time communication. The focus on getting AAC-BCIs to work with machine learning, however, has led to a critical oversight in the field: an inadequate understanding of why individuals want next-generation devices and what trade-offs they are willing to make for faster and more personalized communication. The turn to ML brings this oversight into sharp relief. Individuals should provide input about the data sets used to construct their personal language models, but this raises important ethical questions about what individuals value, how they understand their identity, and what trade-offs they are willing to make relative to their personalized communication data. The goal of this project is to fill this gap in understanding so that researchers can incorporate machine learning into next-generation AAC-BCI systems in a way that is sensitive to the ethical concerns of future users.
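
One common way to combine public and personal data, sketched here purely for illustration and not as this project's method, is to interpolate probabilities from a general language model with probabilities estimated from the user's own text. The toy corpora and mixing weight below are invented.

```python
from collections import Counter

# Hypothetical illustration of language-model personalization: mix word
# probabilities from a large general corpus with probabilities estimated
# from a user's own text (e.g., personal messages).
general_corpus = "the meeting is at noon the report is due soon".split()
personal_corpus = "call my daughter about the garden the garden needs water".split()

def unigram_probs(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

general = unigram_probs(general_corpus)
personal = unigram_probs(personal_corpus)
lam = 0.7  # weight on the personal model; a real system would tune this

vocabulary = set(general) | set(personal)
personalized = {
    word: lam * personal.get(word, 0.0) + (1 - lam) * general.get(word, 0.0)
    for word in vocabulary
}

# Words from the user's own life (e.g., "garden") now rank higher than they
# would under the general model alone.
for word, p in sorted(personalized.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{word}: {p:.3f}")
```

Even this toy version makes the ethical stakes concrete: whatever personal text feeds the model shapes what the device predicts, which is exactly the trade-off future users need a voice in.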

Enhancing an open-source brain-computer interface software for greater adoption and physiologic data sharing

Conducting brain-computer interface (BCI) research places high demands on technology and software. BCIs are computer-facilitated systems that rely on direct, real-time measures of brain activity for environmental interaction. To accelerate the development and accessibility of BCIs, our team created BciPy, an open-source software package written in Python and available on GitHub. This project increases user community engagement and data sharing through enhanced tooling and integration with cloud services, ensuring that experimental data collected with our open-source library are readily accessible and adequately curated. A unique scientific contribution of this data science project is our dataset: physiologic data acquired from people with severe speech and physical impairments secondary to locked-in syndrome for use in BCI research. The project enables cloud-based data sharing that meets the FAIR (Findable, Accessible, Interoperable, and Reusable) guiding principles for scientific data management and stewardship, moving translational science forward to improve the health and participation potential of individuals with severe disabilities. For more information about data sharing, see our Resources page.
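
As a purely hypothetical illustration of the kind of metadata record that supports FAIR-style curation, the sketch below uses invented field names and values; it does not reflect the BciPy schema or any particular repository's requirements.

```python
import json

# Hypothetical dataset metadata record illustrating FAIR-style curation.
# Field names and values are invented for this example.
record = {
    "identifier": "doi:10.0000/example-dataset",  # findable: persistent ID (placeholder)
    "title": "Example EEG calibration session",
    "modality": "EEG",
    "sampling_rate_hz": 300,
    "channels": ["Fz", "Cz", "Pz", "Oz"],
    "file_format": "CSV",                          # interoperable: open, documented format
    "license": "CC-BY-4.0",                        # reusable: explicit terms of use
    "access": "cloud object storage, consent-governed",  # accessible
    "acquisition_software": "BciPy",
}

# Writing the record alongside the data keeps it discoverable by both
# humans and automated indexing tools.
with open("dataset_metadata.json", "w") as f:
    json.dump(record, f, indent=2)
```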

Recent Results

Dan Klee presents on target-related alpha attenuation during BCI tasks (Society for Neuroscience 2021)

Deirdre McLaughlin presents on the effects of BCI software on reading in mild Alzheimer's disease (BCI Meeting 2021)

Betts Peters presents on including participants with speech and physical impairments in BCI research (BCI Meeting 2021)

Tab Memmott presents on using tripolar electrodes to extract gamma activity in a BCI spelling paradigm (BCI Meeting 2021)
