A look at the global shift to open science, exploring how transparency, preregistration, and open data are becoming the new standard for credible research, benefiting scientists, funders and society.
A case study showing how Whisper and GitHub Copilot enable secure, private transcription at scale, a practical application of AI in research environments that maintains data privacy and security standards.
A production-ready local transcription workflow built on OpenAI's Whisper models that addresses the limitations of cloud-based solutions through complete data sovereignty, unlimited scale, reproducible processing and advanced quality control, while maintaining GDPR compliance.
`4authors-year-doi-url` is a CSL style designed to be as compact as possible while retaining the three most critical pieces of information for a reference: who (authors), when (year), and where to find it (DOI/URL).
Understanding the interplay between speech and gesture is crucial for linguistic and cognitive research. The current prototype, available on GitHub, aims to automate the analysis of temporal alignment between spoken demonstrative pronouns and pointing gestures in video recordings. By integrating computer vision (Google’s MediaPipe) and speech recognition (language-specific Vosk models) in Python, the workflow produces enriched video annotations and alignment data, offering valuable insights into deictic communication.
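The core alignment step can be sketched independently of MediaPipe and Vosk: given word intervals from the speech recogniser and gesture intervals from the vision pipeline, pair each pronoun with the gesture it overlaps most in time. This is a minimal illustration, not the prototype's actual code; the function names and the interval representation are assumptions.

```python
def overlap(a, b):
    """Temporal overlap in seconds between two (start, end) intervals."""
    start = max(a[0], b[0])
    end = min(a[1], b[1])
    return max(0.0, end - start)

def align(pronouns, gestures, min_overlap=0.0):
    """Pair each demonstrative pronoun with its best-overlapping gesture.

    pronouns: list of (word, start_s, end_s) tuples from speech recognition
    gestures: list of (start_s, end_s) pointing-gesture intervals from
              the vision pipeline
    Returns (word, gesture_interval, overlap_s) triples, keeping only
    pairs whose overlap exceeds min_overlap.
    """
    pairs = []
    for word, w_start, w_end in pronouns:
        best = max(gestures, key=lambda g: overlap((w_start, w_end), g),
                   default=None)
        if best is not None:
            ov = overlap((w_start, w_end), best)
            if ov > min_overlap:
                pairs.append((word, best, ov))
    return pairs
```

A pronoun with no temporally overlapping gesture is simply dropped, so the output contains only candidate deictic pairings for subsequent manual verification.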
Reducing the impedance in electroencephalography (EEG) is crucial for capturing high-quality brain activity signals. This process involves ensuring that electrodes make optimal contact with the skin without harming the participant. Below are a few tips to achieve this using a blunt needle, electrolyte gel and gentle wiggling.
Researchers often make participants jump through hoops. Because of our personal blind spots, it is easier to see the full extent of these acrobatics in other researchers' work than in our own. In linguistic research, the acrobatics are often spurred by unnatural grammatical constructions.
Say you need to set up a makeshift EEG lab in an office? Easy-peasy: just try to move the hardware as little as possible, especially laptops with dongles sticking out. The rest is a trail of snapshots devoid of captions: a sink, a shower room and other paraphernalia, as this is only an ancillary, temporary, extraordinary little lab, and all those staples are within reach in our mainstream lab (see Ledwidge et al., 2018; Luck, 2014).
In neuroscience, electroencephalography (EEG) has become a cornerstone for understanding the intricate workings of the human brain. However, EEG software and hardware come with their own set of constraints, particularly in the management of markers, also known as triggers. This article sheds light on these limitations and on future prospects for marker management in EEG studies, while also introducing R functions that help deal with vmrk files from BrainVision.
Electroencephalographic (EEG) signals are often contaminated by ocular and muscle artifacts such as blinks, jaw clenching and (of course) yawns, which generate electrical activity that can obscure the brain signals of interest. These artifacts typically manifest as large, abrupt changes in the EEG signal, complicating data interpretation and analysis. To mitigate these issues, participants can be instructed during the preparatory phase of the session to minimize blinking and to keep their facial muscles relaxed. Additionally, researchers can emphasize the importance of staying still and provide practice sessions to help participants become aware of their movements, thereby reducing the likelihood of such artifacts affecting the EEG recordings.