"How Apple is putting voices in users' heads — literally"
- On the one hand, the piece highlights Apple's ongoing work on accessibility features (of which I use VoiceOver's speech output daily, to have articles on websites read aloud to me).
For now, the implant and Apple’s new technology is more than enough for Mathias Bahnmueller. Before he had the surgery, Bahnmueller’s hearing difficulties were getting in the way of his job—he was unable to follow presentations at board meetings, for instance. They cut him off from his loved ones, too. He would ask his 10-year-old daughter to repeat what she’d said, and when she’d answer, “Never mind, it wasn’t important,” he’d be devastated.
Now that he has the implant, he can hear his daughter the first time she speaks. Using his new device, he listens to audiobooks streamed directly to his skull. And when he recently went to a noisy brewpub on date night with his wife, he pulled out his phone, changed the settings, and focused only on what she said.
- On the other hand, it reveals details about Bluetooth Low Energy Audio: an in-house development that Apple has been shipping since iOS 9 but has kept quiet about to this day.
To solve the huge problem of streaming high-quality audio without quickly draining the tiny zinc batteries in hearing aids, Apple had previously developed a new technology called Bluetooth LEA, or Low Energy Audio. The company released that (but didn’t talk about it) when the first Made for iPhone hearing aids appeared in 2014. Previously, the low-energy standard for Bluetooth—called LE—was used, as its name implies, only for tasks that are parsimonious in sending data, such as getting readings from heart rate monitors and FitBits. Apple says that LEA is the first use of the low-energy standard to stream high-quality music and voice while preserving LE’s battery-extending properties.
Steven Levy | Wired
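The "parsimonious" LE traffic the quote mentions really is tiny: a standard Heart Rate Measurement notification is just a flags byte plus one or two bytes of data. A minimal sketch of decoding such a payload, per the Bluetooth SIG Heart Rate Service characteristic layout (the function name is my own):

```python
def parse_heart_rate(payload: bytes) -> int:
    """Decode a Bluetooth Heart Rate Measurement characteristic (0x2A37).

    Byte 0 is a flags field; bit 0 selects the heart-rate format:
    0 -> uint8 in byte 1, 1 -> uint16 little-endian in bytes 1-2.
    """
    flags = payload[0]
    if flags & 0x01:
        return int.from_bytes(payload[1:3], "little")
    return payload[1]

# A typical notification is only 2-3 bytes -- the kind of data trickle
# classic Bluetooth LE was designed for, in contrast to the continuous
# high-quality audio stream that LEA carries.
print(parse_heart_rate(bytes([0x00, 72])))          # uint8 format  -> 72
print(parse_heart_rate(bytes([0x01, 0x48, 0x01])))  # uint16 format -> 328
```

That two-to-three-byte packet, sent at most a few times per second, is orders of magnitude less data than streaming music or voice, which is why LEA's feat of doing the latter on LE's power budget is notable.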