Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to ‘Settings’ > ‘Apps’, then delete the application.”
Google Photos has been “searchable by name” for years now. Tell it the name of a face in one photo and it can go search (pretty successfully) through all your photos for other photos containing that person. And, of course, once told, it never forgets.
Is it still a service when you are the product? Or, are you being served? https://en.wikipedia.org/wiki/To_Serve_Man_(The_Twilight_Zone)
For people who have not read the article:
Forbes states that there is no indication that this app can or will “phone home”.
Its stated use is for other apps to scan an image they already have access to and find out what kind of thing it is (known as “classification”). For example, to find out whether the picture you’ve been sent is a dick pic so the app can blur it.
My understanding is that, if this is implemented correctly (a big ‘if’) this can be completely safe.
Apps requesting classification could be limited to only classifying files that they already have access to. Remember that Android has a concept of “scoped storage” nowadays that lets you restrict folder access. If that’s the case, then it’s no less safe than not having SafetyCore at all. It just saves you space, as companies like Signal, WhatsApp etc. no longer need to train and ship their own machine learning models inside their apps; it becomes a common library / API any app can use.
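Just to make the “common API” idea concrete, here’s a rough sketch of the shape such a service could take. SafetyCore’s real interface isn’t public, so every name below is hypothetical:

```kotlin
import android.net.Uri

// Hypothetical shapes only: SafetyCore's actual API surface is not public.
interface ContentClassifier {
    // Runs entirely on-device. The Uri must already be readable by the
    // calling app under scoped storage; the classifier adds no new access.
    suspend fun classifyImage(image: Uri): Classification
}

data class Classification(val label: String, val confidence: Float)

// Example caller: decide whether to blur an incoming picture.
suspend fun shouldBlur(image: Uri, classifier: ContentClassifier): Boolean {
    val result = classifier.classifyImage(image)
    return result.label == "nudity" && result.confidence > 0.8f
}
```

The point is the trust boundary: under that design the classifier only ever sees files the calling app could already read.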
It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don’t know enough to say.
Besides, you think that Google isn’t already scanning for things like CSAM? It’s been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I’ve not seen anything about it being done on devices yet (correct me if I’m wrong).
Forbes states that there is no indication that this app can or will “phone home”.
That doesn’t mean that it doesn’t. If it were open source, we could verify it. As is, it should not be trusted.
That would definitely be better.
The Graphene devs say it’s a local-only service.
Open source would be better (and I can easily see open source alternatives being made if you’re not locked into a Google Android-based phone), but the idea is sound and I can deny network privileges to the app with Graphene so it doesn’t matter if it does decide to one day try to phone home… so I’ll give it a shot.
God I wish I could completely deny internet access to some of my apps on stock android. It’s obvious why they don’t allow it though.
Check out NetGuard. It’s an app that registers itself as a local VPN so most of your traffic has to go through it, and then you can deny/allow internet access per app. Even works without root.
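For the curious, this is roughly the trick, as a minimal sketch using Android’s real VpnService API (the user-consent dialog, packet forwarding, and NetGuard’s actual filtering logic are all omitted here):

```kotlin
import android.content.pm.PackageManager
import android.net.VpnService

// Sketch of a local "firewall" VPN. Traffic from selected apps is routed
// into a TUN interface owned by this service; if we never forward those
// packets, the apps simply have no connectivity. No root needed.
class FirewallVpnService : VpnService() {

    fun startBlocking(blockedPackages: List<String>) {
        val builder = Builder()
            .addAddress("10.0.0.2", 32)  // dummy address for the TUN device
            .addRoute("0.0.0.0", 0)      // claim all IPv4 traffic

        // Only pull the apps we want to block into the tunnel; everything
        // else bypasses the VPN and keeps working normally.
        for (pkg in blockedPackages) {
            try {
                builder.addAllowedApplication(pkg)
            } catch (e: PackageManager.NameNotFoundException) {
                // Package not installed; ignore it.
            }
        }

        // The returned file descriptor is the TUN device. Reading packets
        // from it and discarding them starves the blocked apps.
        val tun = builder.establish()
    }
}
```

NetGuard itself does the inverse (captures everything and selectively forwards), but the mechanism is the same: the OS hands all routing to a local app, so no traffic actually has to leave the phone through it.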
You can, if you root your phone. Unless it is not a thing anymore.
Issue is, a certain cult (Christian dominionists), with the help of many billionaires (including Muskrat), has installed a fucking dictator in the USA, and they’re making good on their vow to “save every soul on Earth from hell”. If you get a porn ban, it’ll phone not only home, but directly to the FBI’s new “moral police” unit.
the police of vice and virtue, just like SA has.
This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people’s privacy, and still it got shouted down.
I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing
EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy in lemmy
it had a ridiculous amount of safeties to protect people’s privacy
The hell it did, that shit was gonna snitch on its users to law enforcement.
Nope.
A human checker would get a reduced-quality copy after multiple CSAM matches. No police were to be called unless the human checker verified a positive match.
Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked
That’s a fucking wiretap, yo
Google did end up doing exactly that, and what happened was, predictably, people were falsely accused of child abuse and CSAM.
I’m not surprised if they’re also using AI, which is very error-prone.
Overall, I think this needs to be done by a neutral third party. I just have no idea how such a third party could stay neutral. Same with social media content moderation.
I have 5 kids. I’m almost certain my photo library of 15 years has a few completely innocent pictures where a naked infant/toddler might be present. I do not have the time to search 10,000+ pics for material that could be taken completely out of context and reported to authorities without my knowledge. Plus, I have quite a few “intimate” photos of my wife in there as well.
I refuse to consent to a corporation searching through my device on the basis of “well just in case”, as the ramifications of false positives can absolutely destroy someone’s life. The unfortunate truth is that “for your security” is a farce, and people who are actually stupid enough to intentionally create that kind of material are gonna find ways to do it regardless of what the law says.
Scanning everyone’s devices is a gross overreach and, given the way I’ve seen Google and other large corporations handle reports of actually-offensive material (i.e. they do fuck-all), I have serious doubts over the effectiveness of this program.
Apple had it report suspected matches, rather than warning locally
It got canceled because the fuzzy hashing algorithms turned out to be so insecure it was unfixable (easy to plant false positives).
They were not “suspected”; they had to be matches to actual CSAM.
And after that, a reduced-quality copy was shown to an actual human, not an AI like in Google’s case.
So the false positive would slightly inconvenience a human checker for 15 seconds, not get you Swatted or your account closed
Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.
Also, even if those attacks were prevented they’re still going to look through basically your whole album if you trigger the alert
And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.
No cops are called, no accounts closed
The scaling attack specifically can make a photo sent to you look innocent to you and malicious to the reviewer, see the link above
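To see why these perceptual hashes are attackable at all, here’s a toy average-hash in plain JVM Kotlin. It’s nothing like Apple’s NeuralHash, but it shares the fatal property: only a tiny downscaled summary of the image is compared, so anything that survives (or manipulates) downscaling can game it.

```kotlin
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

// Toy "average hash": shrink to 8x8 grayscale, then emit one bit per
// pixel (brighter than the mean = 1). Real systems use fancier hashes,
// but they likewise compare only a low-resolution summary of the image,
// which is what scaling attacks exploit.
fun averageHash(file: File): Long {
    val src = ImageIO.read(file)
    val small = BufferedImage(8, 8, BufferedImage.TYPE_BYTE_GRAY)
    small.createGraphics().apply {
        drawImage(src, 0, 0, 8, 8, null) // downscale to 8x8
        dispose()
    }
    val pixels = IntArray(64) { small.raster.getSample(it % 8, it / 8, 0) }
    val mean = pixels.average()
    var hash = 0L
    for ((i, p) in pixels.withIndex()) if (p > mean) hash = hash or (1L shl i)
    return hash
}

// Hamming distance between two hashes; a "match" is typically a small distance.
fun distance(a: Long, b: Long): Int = java.lang.Long.bitCount(a xor b)
```

An attacker who nudges pixels until `distance()` hits zero against a target hash has planted a false positive; the scaling attack mentioned above plays the same game in reverse against the reviewer’s downscaled copy.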
The official reason they dropped it is because there were security concerns. The more likely reason was the massive outcry that occurs when Apple does these questionable things. Crickets when it’s Google.
The feature was re-added as a child safety feature called “Communication Safety” that is optional on child accounts and will automatically block nudity sent to children.
Doing the scanning on-device doesn’t mean the findings can’t be reported onward. I don’t want others going through my private stuff without asking, not even machine learning.
Gimme Linux phone, I’m ready for it.
If there was something that could run Android apps virtualized, I’d switch in a heartbeat.
Do you mean sandboxed?
Not necessarily… I mean, if they run under the same VM, I’d be fine with that as well… but having a sandboxed wrapper would for sure be nice.
Waydroid?
To be clear, I haven’t used it at all and have no idea how well it works.
I gave it a run on Ubuntu Touch with a Fairphone like 8 months ago… It was still pretty rough then.
I remember reading recently that it’s gotten better (haven’t tried myself so don’t hold me to it). I can say that Wayland in general has come a long way since I switched to Linux ~2 years ago
I have used Waydroid, mainly with FOSS apps, and although it has some rough edges, it often works well enough to get the functionality of one or two Android apps.
Linux on mobile as a whole isn’t daily driver ready yet in my opinion. I’ve only tried pmOS on a OP6, but that seems to be a leading project on a well-supported phone (compared to the rest).
Every one of them can, AFAIK. I have a second cheap used phone I picked up to play with Ubuntu Touch and it has a system called Waydroid for this. Not quite seamless and you’ll want to use native when possible but it does work.
SailfishOS, postmarketOS, Mobian, etc. can all use Waydroid or something similar.
There are two solutions for that. One is Waydroid, which is basically what you’re describing. Another is android_translation_layer, which is closer to WINE in that it translates API calls to more native Linux ones, although that project is still in the alpha stages.
You can try both on desktop Linux if you’d like. Just don’t expect to run apps that require passing SafetyNet, like many banking apps.
I know about Waydroid, but never heard of ATL.
So yeah, while we have the fundamentals, we still don’t have an OS that’s stable enough as a daily driver on phones.
And this isn’t a Linux issue. It’s mostly because of proprietary drivers. GrapheneOS already has the issue that it only works on Pixel phones.
I can imagine; bringing a Linux-only mobile OS to life is even harder. I wish Android phones were designed so that there’s a driver layer and an OS layer, with standardized APIs to simply swap the OS layer for any Unix-like system.
Halium is basically what you’re talking about. It uses the Android HAL to run Linux.
The thing is, that also uses the Android kernel, meaning that there will essentially never be a kernel update since the kernel patches by Qualcomm have a ton of technical debt. The people working on porting mainline Linux to SoCs are essentially rewriting everything from scratch.
The Firefox Phone should’ve been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.
Too bad Firefox is going the same way as Google; they’re updating their privacy terms of use.
Yep. I’m furious at Mozilla right now. But when the Firefox Phone was in development, they were one of the web’s heroes.
It says it’s only for LLMs? As long as they don’t try to expand the “privacy” terms further… In any case, I download alternative browsers anyway.
I’m mostly just frustrated that the best option has now become merely the lesser evil.
Unfortunately Mozilla is going the enshittification route more and more. Or, good in this case that the Firefox Phone did not take off.
Is there some good Chromium browser with hardware video decoder support and a working adblocker, that is not Brave? Or which Firefox fork is recommended?
Cromite for Chromium, and IronFox for Firefox?
deleted by creator
I’m sticking with Gecko for sure. Trying out Waterfox over the weekend on desktop, and Fennec F-Droid on my phone.
I just gave up and pre-ordered the Light Phone 3. Anytime I truly need a mobile app, I can just use an old iPhone and a WiFi connection.
Great, it’ll have to plow through ~30GB of 1080p recordings of darkness and my upstairs neighbors living it up in the AMs. And nothing else.
This is the stupidest shit, moral panic levels of miscomprehension. I mean, I was miffed and promptly removed safetycore because I don’t mind seeing sex organs and don’t want shit using battery for no reason, but wow
Forbes.

Edit: ok, the article is not so bad, just the shitty blurb from some forum reproduced here on Lemmy.
Kind of weird that they’re installing this dependency whether or not you’ll ever enable those planned scanning features. Here is an article mentioning that future feature, Sensitive Content Warnings. It does sound kind of cool; less chance to accidentally send your dick pic to someone, I guess.
Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a “speed bump” that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares.
All of this happens on-device to protect your privacy and keep end-to-end encrypted message content private to only sender and recipient. Sensitive Content Warnings doesn’t allow Google access to the contents of your images, nor does Google know that nudity may have been detected. This feature is opt-in for adults, managed via Android Settings, and is opt-out for users under 18 years of age.
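Translating that description into code shape, the receive side would look something like the sketch below. Every name here is invented for illustration; Google Messages’ actual implementation is not public.

```kotlin
// Hypothetical receive-side flow for a "Sensitive Content Warnings" style
// feature, matching the quoted description: classify locally, blur first,
// then offer a "speed bump" before showing the image.
enum class Verdict { SAFE, MAYBE_NUDITY }

class IncomingImageHandler(
    private val classify: (ByteArray) -> Verdict,               // on-device model
    private val show: (image: ByteArray, blurred: Boolean) -> Unit,
    private val speedBump: (onViewAnyway: () -> Unit) -> Unit,  // warning dialog
) {
    fun onImageReceived(image: ByteArray) {
        // The classifier runs locally; the image itself is never uploaded.
        when (classify(image)) {
            Verdict.SAFE -> show(image, false)
            Verdict.MAYBE_NUDITY -> {
                show(image, true)                 // blur first
                speedBump { show(image, false) }  // user can still opt to view
            }
        }
    }
}
```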
Looks like more of a chance of false positives happening and getting the police to raid your home to confiscate your devices. I don’t care what the article says; I know Google is getting access to that data, because that’s who they are.
Just for verifying accuracy and improving the product.
Huh. My device seems to have been skipped? I don’t do anything special, I’m using Play Store and Play Services, and I’m up to date, but it’s not showing up in my settings app list
Pixel 7a here, it was installed and I have no idea when
Sometimes it uses a different name, I have noticed; try to see if something with a similar name is listed.
I haven’t had it yet either. The only suspicious thing I notice is some Android System Intelligence, but that has been there for a while now. I haven’t dared to uninstall/deactivate it yet since I don’t know if anything critical depends on it. I haven’t noticed any suspicious network activity either in Rethink, beyond the usual bullshit like some uninstalled application still trying to connect to Google as “unknown”.
Maybe they’re experimenting with installing it on some phones. I had it, but under a different name. I couldn’t find it in my apps list, but when someone posted a direct link to the Play Store app page, it showed as installed.
Hmm, I looked it up myself and it doesn’t seem to say it’s installed for me there. Can’t find it by searching on my phone, only on my PC through a search engine. But someone in the comments there brought up a good point: their old phone basically got bricked by this because it was incompatible.
I also have a Fairphone, though I’m not sure if that really is the reason. Maybe they are indeed installing it gradually.
Thank you, I was able to find and uninstall the app with no issues.
Samsung? I was able to on my S23 Ultra.
Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
GrapheneOS (an Android security developer) provides some comfort: SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”
But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.
Graphene could easily allow for open source solutions to emulate the SafetyCore interface. Like how it handles Google’s location services.
There’s plenty of open source libraries and models for running local AI, seems like this is something that could be easily replicated in the FOSS world.
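As a ballpark of how little code local-only inference takes, here’s a minimal sketch using TensorFlow Lite’s Interpreter API from Kotlin. The model file, its input shape, and the two-class output are all placeholders, not anything SafetyCore actually ships:

```kotlin
import java.io.File
import org.tensorflow.lite.Interpreter

// Local-only inference sketch. The model file, the 224x224 input and the
// two-class output are assumptions for illustration.
class LocalImageClassifier(modelFile: File) {
    private val interpreter = Interpreter(modelFile)

    // Input: one preprocessed image as normalized RGB floats,
    // shape [1][224][224][3]. Output: per-class scores.
    fun classify(input: Array<Array<Array<FloatArray>>>): FloatArray {
        val output = arrayOf(FloatArray(2)) // e.g. [safe, sensitive]
        interpreter.run(input, output)
        // No network involved anywhere: the model runs in-process,
        // which is exactly the property GrapheneOS describes.
        return output[0]
    }
}
```

An open model plus a thin wrapper like this is basically the whole feature; the hard part is training and distributing good models, not the plumbing.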
Hope they like all my dick pics
Don’t worry they won’t!
/Burn
deleted by creator
Most people don’t really know what that actually means, and they don’t feel they have anything to hide from some nebulous corporate entity.
deleted by creator
deleted by creator
why, what do you recommend?
I mean, you’ve just disclaimed the whole Android ecosystem, and the only other alternative is Apple, where it’s questionable whether it’s any better.
And this would have even applied to my Fairphone!
Would have, if I hadn’t gotten rid of Google services the day I got it.

I don’t have to recommend anything just because I’m asking why people are buying spyware tech.
Just like I may not know the proper way to safely jump out of an airplane, but I do know a parachute is involved.
A person asking why people do a thing that seems stupid isn’t obligated to solve the problem.
Then I guess the better question is what do you use?
Wouldn’t it be a given that I don’t have an android phone?
That’s what you don’t use, which wasn’t what they asked, right?
I just realized the network error made me doubly post my comment, I’ve deleted the other copy
No worries. Happens to all of us.
Per one tech forum this week
Stop spreading misinformation.
And what exactly does that have to do with GrapheneOS?
Have you even read the article you posted? It mentions these posts by GrapheneOS
Please, read the links. They are the security and privacy experts when it comes to Android. That’s their explanation of what this Android System SafetyCore actually is.
graphene folks have a real love for the word misinformation (and FUD, and brigading). That’s not you under there👻, Daniel, is it?
After 5 years of his antics and hateful bullshit lies, I think I can genuinely say that word triggers me.

So is this really just a local AI model? Or is it something bigger? My S25 Ultra has the app but it hasn’t used any battery or data.
I mean, the GrapheneOS devs say it is. Are they going to lie?
Yes, absolutely, and regularly, and without shame.
But not usually about technical stuff.
If the app did what OP is claiming, then the EU would have a field day fining Google.
To quote the most salient post:
The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.
Which is a sorely needed feature to tackle problems like SMS scams
If the cellular carriers were forced to verify that caller ID (or the SMS equivalent) was accurate, SMS scams would disappear (or at least be weakened). Google shouldn’t have to do the job of the carriers, and if they wanted to implement this anyway, they should let the user choose which service performs the task, similar to how they let the user choose which “Android System WebView” should be used.
Carriers don’t care. They are selling you data. They don’t care how it’s used. Google is selling you a phone. Apple held down the market for a long time for being the phone that has some of the best security. As an android user that makes me want to switch phones. Not carriers.
You don’t need advanced scanning technology running on every device, with access to every single bit of data you’ve ever seen, to detect scams. You need telco operators to stop forwarding forged message headers and… that’s it. Cheap, efficient, zero risk of invasion of privacy through a piece of software you did not need but that was put there “for your own good”.
Why do you need machine learning for detecting scams?
Is someone in 2025 trying to help you out of the goodness of their heart? No. Move on.
If you want to talk money, then it’s in businesses’ best interest that their users’ money is spent on their products, not scammed away through the use of their products.
Secondly, machine learning (or algorithms generally) can detect patterns in ways a human can’t. In some circles I’ve read that the programmers themselves can’t decipher from the code how the end result is spat out, just that the inputs guide it. Besides the fact that scammers can circumvent any carefully laid-down antispam, antiscam, or antivirus built with traditional software, a learning algorithm will be magnitudes harder to bypass. Or easier. Depends on the algorithm.
I don’t know the point of the first paragraph… scams are bad? Yes? Does anyone not agree? (I guess scammers.)
For the second, we are talking in the wild abstract, so I feel comfortable pointing out that every automated system humanity has come up with so far has pulled in our own biases, and since AI models are trained by us, this should be no different. Second, if the models are fallible, you cannot talk about success without talking about false positives. I don’t care if it blocks every scammer out there if it also blocks a message from my doctor. Until we have data on the agreement between these new algorithms and desired outcomes, it’s pointless to claim they are better at X.
For those that have issues on Samsung devices: see here if you’re getting the “App not installed as package conflicts with an existing package” error:
If you have a Samsung device, uninstall the app from the Knox Secure Folder as well, by going to Secure Folder > Settings > Apps.
Fuck these cunts.
Google harvesting all your data for profits. I’m shocked. Shocked I say.