From *How to Live Safely in a Science Fictional Universe* by Charles Yu.
I’m typing, even though strictly speaking I am using the TM-31’s cognitive-visual-motor-sound-activated recording module, which operates, as you might guess, by simultaneously tracking output from the user’s neural activity, voice, finger movements, retinal movements, and facial muscle contractions. It’s part keyboard, part microphone, part optical scan, and part brain scan. When I want to type, I raise my hands up in front of me, palms down, in a position approximating typing, and a virtual QWERTY layout materializes in front of me. When I want to switch to voice, I just start reading the book, and the unit switches to an auditory-recognition transcription system, converting my voice into modifications in the written text. If I get tired of typing and voice modes, I can simply read the text to myself, and the unit will track my eye movements to determine, with near-perfect accuracy, what word I am reading, based on the minute ups and downs, lefts and rights of my retinas, and then match those movements, using brain activity data as a kind of rough double check, against the blood flow and heat output of various areas of the language- and concept-processing lobes and sublobes of my brain.