• irotsoma@lemmy.world
    4 hours ago

    I don’t think the NSA gets it directly unless they’ve installed an app on your device, which, if they’re using Google and Apple to do it for them, could be fairly well hidden.

    But a lot of the apps people have installed do listen when people don’t expect it, for commercial purposes. That information is then available to the NSA or any other law enforcement around the world basically at will. But there are things you can do to prevent that. Like not installing untrustworthy apps and if you have to, disabling their access to the microphone, storage, etc., if you have a device that allows that level of control.

    But there isn’t a blanket listen-to-everything-and-record-it kind of thing going on, or you’d be using a lot more bandwidth. Most devices aren’t powerful enough to do voice recognition beyond a few key words, let alone full realtime transcription, so the audio would have to be passed to a server.
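    Some back-of-the-envelope arithmetic illustrates the bandwidth point. The figures below are illustrative assumptions (16 kHz / 16-bit mono raw PCM, and a typical speech codec in the neighborhood of 16 kbit/s), not measurements of any real surveillance system:

    ```python
    # Rough daily bandwidth cost of streaming a microphone continuously.
    # Assumptions: 16 kHz sample rate, 16-bit mono PCM, and a speech codec
    # at 16 kbit/s -- both are illustrative, not measured values.

    SECONDS_PER_DAY = 24 * 60 * 60

    raw_bytes_per_sec = 16_000 * 2  # 16,000 samples/s, 2 bytes each
    raw_mb_per_day = raw_bytes_per_sec * SECONDS_PER_DAY / 1_000_000

    codec_bits_per_sec = 16_000  # 16 kbit/s compressed speech
    codec_mb_per_day = codec_bits_per_sec / 8 * SECONDS_PER_DAY / 1_000_000

    print(f"raw PCM:    {raw_mb_per_day:,.0f} MB/day")   # ~2,765 MB/day
    print(f"compressed: {codec_mb_per_day:,.0f} MB/day") # ~173 MB/day
    ```

    Even heavily compressed, that’s on the order of 170 MB per day per device, which would blow through most mobile data caps and be easy to spot in traffic monitoring.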

    The real issue for now is things like keyboard apps and messaging apps that send everything you type, or the multitude of apps that don’t actually do user-to-user (end-to-end) encryption but decrypt in the middle, so the data can be stored, combined, and compressed, which makes it available to commercial interests and law enforcement.

      • magic_smoke@links.hackliberty.org
        1 hour ago

        At least the proprietary ones made by the Five Eyes. Russia and China have their own programs.

        There’s a reason I run pfSense routers, OpenWrt APs, and MikroTik switches.

      • irotsoma@lemmy.world
        2 hours ago

        To some extent, that’s true. But getting the data off your phone is the first step. That’s where you have the most control, and the bottleneck of poor internet service and data caps prevents transmitting too much data, for now.

        Audio data can only be compressed so far before it becomes impossible for a server to transcribe it. And you’re talking about a constant stream of background audio which means you can’t afford to lose much of that data to compression at all. The device might be able to differentiate speech from background noise and only send the stream to the server when someone is speaking, but that’s about it for the large majority of devices.
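        The speech/background differentiation mentioned above can be done very cheaply on-device. Here’s a minimal sketch of energy-based voice activity detection, gating which frames would even be worth sending; the frame size and RMS threshold are illustrative assumptions (real systems use more robust features than raw energy):

        ```python
        # Minimal energy-based voice activity detection (VAD) sketch.
        # Only frames whose RMS energy exceeds a threshold are yielded,
        # i.e. treated as "speech". Threshold and frame length are
        # illustrative assumptions, tuned per microphone in practice.
        import array
        import math

        FRAME_MS = 30
        SAMPLE_RATE = 16_000
        SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000
        THRESHOLD = 500.0  # RMS amplitude; arbitrary example value

        def rms(samples):
            """Root-mean-square amplitude of a sequence of samples."""
            if not samples:
                return 0.0
            return math.sqrt(sum(s * s for s in samples) / len(samples))

        def speech_frames(pcm: bytes):
            """Yield frames of 16-bit mono PCM whose energy suggests speech."""
            samples = array.array("h", pcm)  # signed 16-bit samples
            step = SAMPLES_PER_FRAME
            for i in range(0, len(samples) - step + 1, step):
                frame = samples[i:i + step]
                if rms(frame) > THRESHOLD:
                    yield frame
        ```

        Something this simple runs fine on a low-power device, but notice it only decides *when* to send audio; actually turning that audio into text is a far heavier job.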

        Interpreting all speech, including accents, still takes a server, unless it’s a high-end device and you don’t mind the battery drain making the user suspicious. It’s just not feasible with current processor, battery, and bandwidth limitations to listen to everything.