@misk I think your federation software is broken. In Mastodon, the URLs in your posts just lead back to themselves every time, not out to an external article.
@[email protected] @[email protected] Same thing happens for me. I use Sharkey (a Misskey fork) on my instance, and I have to go to the linked post and click the link there to access it.
I’m not sure if you’ll get this reply @[email protected], but here’s the link visible from Lemmy itself: https://tuta.com/blog/digital-fingerprinting-worse-than-cookies.
Your method of accessing this Lemmy community seems not to be working on your side somehow. You might try a different app - I’ve never used Mastodon so I don’t know what might work.
@OpenStars That was my point. I can open the post on its own server and see it as intended. But the federation part of the Lemmy software is clearly not generating the right data. It should embed the Tuta.com link instead of linking back to the post itself.
What I mean is, the link in a Lemmy community when viewed from a Lemmy instance works just fine. So it’s not broken at that level.
I can't speak to how it comes across on Mastodon, or to your particular method of accessing it, as you showed in your screenshot. In general, instances running the Mbin software seem to handle both Lemmy and Mastodon better, but overall communication between Mastodon and Lemmy isn't perfect, as you said.
Sir, this is a Lemmy’s.
It's all Fediverse. You can follow things on Lemmy from Mastodon and vice versa, and so on.
I’m aware but the degree of compatibility differs. Lemmy to Mastodon is pretty smooth but subOP is using some different microblogging platform it seems.
I loled
@mighty_orbot @misk I’m using Friendica. From here, the links are normal. As it’s also not Lemmy, I guess it’s a Mastodon-specific (or even instance-specific) problem.
Mbin will now load pictures within the comment?!
uBlock Origin, Ghostery, and what else? Scriptmonkey maybe?
They’ll stop it.
Ooooh, no, they won't stop this. Fingerprinting is the workaround for tracking when people use all the things you just mentioned.
You have to either mask the fingerprint, like Brave does, or spoof the headers and block JS to make the fingerprint useless.
If that’s what it takes. It’s worth it.
Nope. Try Creep.js. It is real creepy.
New? Isn't this at least a decade-old method of tracking?
You'd THINK the article would link to a source about the fingerprinting in question instead of being 90% filler slop and ads for their own service… Anyone got a link?
What is it you're looking for? Do you want to know what kinds of information are used for fingerprinting?
If so, check out coveryourtracks.eff.org and amiunique.org.
I’m aware of fingerprinting techniques, thank you. The article is claiming that Google will start using some of those and I’m looking for the source for that claim, hopefully with specifics about which techniques are involved. Confusingly, the article does not appear to provide such a source.
Thanks – that's an announcement about policy updates. I already read it, and it says nothing about fingerprinting. The only change to underlying technologies it mentions is the use of, e.g., trusted execution environments (the docs for which, per a further link, are in fact on GitHub). Those claim to let advertisers run ad campaigns through Google Ads while keeping their campaign data provably locked away from Google. So basically, all these links are about purported "privacy-enhancing" techs. You'd be forgiven for taking that with an enormous grain of salt, but either way, there is nothing in there about fingerprinting.
The Guardian article basically paraphrases the Tuta one – or maybe it's the other way around – but it also does not provide actual sources.
I just want a source on what fingerprinting Tuta is claiming Google will start using. I feel like the details of the purported fingerprinting techniques should be front and center to this discussion and I’m frustrated that the article entirely fails to provide that info.
Yeah, I also looked into it, and there seems to be no concrete information on that, just speculation about the policy change, like this one:
“While Google doesn’t explicitly state that IP addresses and other fingerprint methods are now allowed, the Privacy Disclosure section of Google’s February 16th Platforms Program Policies now explicitly mentions ‘cookies, web beacons, IP addresses, or other identifiers.’”
When you dive into it, it does look more like companies that sell encryption and VPNs are using a potential danger to get more subscribers.
Ah, that Techlicious link is a great find, thanks. It does lay out clearly what the theoretical concern is. That's still a far cry from the "Google will start fingerprinting you" scenario that seems to have people up in arms.
Thanks for digging out this link, I really appreciate it.
Time for a user agent switcher. Like "Yeah, I swear, I'm a PS5 that has only monospaced Comic Sans installed."
Fingerprinting unfortunately uses more than user agent strings. It takes hashes of data gathered in your browser from a JavaScript context that is not easily masked or removed. For example, it might render a gradient of colors projected onto a curved 3D plane; the specific result of this creates a hash that is effectively unique to your GPU and driver. Trackers can also roughly infer your location from low-level network details such as the time-to-live values in TCP packets, which is something you can't control on the client side at all. If you TRULY want to avoid tracking by Google, you need to block Google domains in your hosts file and maybe consider disabling JavaScript on all sites by default until you trust them. Also, don't use Google.
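To make that concrete, here's a minimal sketch of canvas fingerprinting in plain JavaScript (illustrative only, not any vendor's actual tracking code): draw something, read the pixels back, and hash the result.

// Minimal canvas-fingerprint sketch: render a gradient and some text, then
// hash the pixel data. Tiny GPU/driver/font differences change the bytes,
// and therefore the hash, so the value is fairly stable per machine.
async function canvasFingerprint() {
  const canvas = document.createElement("canvas");
  canvas.width = 220;
  canvas.height = 40;
  const ctx = canvas.getContext("2d");

  const gradient = ctx.createLinearGradient(0, 0, 220, 0);
  gradient.addColorStop(0, "#f60");
  gradient.addColorStop(1, "#069");
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, 220, 40);
  ctx.font = "16px Arial";
  ctx.fillStyle = "#fff";
  ctx.fillText("fingerprint test 123", 4, 24);

  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}

canvasFingerprint().then(hash => console.log(hash));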
How must it feel to be clever enough to come up with these ideas and then implement them for companies invading everyone's privacy for advertisement revenue and malicious information serving or stealing?
I guess they sleep soundly on a fat bank account.
Jokes aside, keep in mind that the idea of fingerprinting is that your computer's configuration is as unique as a fingerprint (e.g., your monitor is at X resolution, you are on this operating system, you are using these extensions in this browser, you have these fonts on your system).
Setting your user agent to something super unique is basically shining a spotlight on yourself.
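For a sense of what a page can read passively, here's a rough sketch of the kind of signals a script can collect with no special permissions; real fingerprinting libraries combine far more than this.

// Rough sketch of passively readable data. No single value identifies you,
// but the combination often does, and a hand-rolled "unique" user agent
// only makes the combination rarer.
const signals = {
  userAgent: navigator.userAgent,
  language: navigator.language,
  platform: navigator.platform,
  cpuCores: navigator.hardwareConcurrency,
  deviceMemory: navigator.deviceMemory, // Chromium-only, coarse RAM bucket
  screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
  timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
  touchPoints: navigator.maxTouchPoints,
};
console.log(JSON.stringify(signals));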
It’s way worse than that.
Even if you somehow magically have the same settings as everyone else, your mouse movement will still be unique.
You can even render something on a canvas out of view, and depending on your GPU, your graphics driver, etc., the text will look different…
There is no real way to escape fingerprinting.
I have a novice coding question using the mouse tracking as an example: Is it possible to intercept and replace mouse tracking data with generic inputs? For example, could you implement an overlay that blocks mouse interactions, and instead of physically clicking on elements, send a direct packet to the application to simulate selecting those elements?
Yes, it’s possible. That’s the way a lot of automated web UI testing tools work. The problem with doing it during normal browser use is that your intentional actions with the real mouse wouldn’t work right, or the page would start acting like you clicked on things you didn’t click on.
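For the curious, a synthetic interaction looks roughly like this; it's the same DOM-event primitive that UI-testing tools build on (the element selector here is just a hypothetical example).

// Sketch: programmatically "click" an element without any real mouse movement.
const button = document.querySelector("#submit"); // hypothetical element
const click = new MouseEvent("click", {
  bubbles: true,
  cancelable: true,
  view: window,
  clientX: 100, // fixed, generic coordinates instead of a real pointer path
  clientY: 50,
});
button.dispatchEvent(click);
// The page sees a click with no preceding mousemove trail, which is itself
// an unusual, and therefore identifying, pattern.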
We need Richard Hendricks and his new internet asap
What’s this about? Fill me in? 🙏
Google can’t fingerprint you very well if you block all scripts from Google.
Considering how few people block all scripts, this could also make it trivial for them to fingerprint you.
plus Random User Agent.
Random User Agent.
I love this.
Anyone who uses uBlock blocks Google scripts.
uBlock Origin + PiHole FTW.
So I thought this is never going to fly under GDPR. Then the article goes on to say:
Many privacy laws, including the EU’s GDPR and California’s CCPA, require user consent for tracking. However, because fingerprinting works without explicit storage of user data on a device, companies may argue that existing laws do not apply which creates a legal gray area that benefits advertisers over consumers.
Oh come on Google, seriously? I remember a time when Google were the good guys, can’t believe how they’ve changed…
That time was like 20 years ago, dude
Oh absolutely. At this point I’m not surprised anymore that they turned to shit, it’s more like I think they’ve hit rock bottom already but they manage to surprise me with new ways to dig their hole even deeper.
It’s still sad to see the development. We’re allowed to mourn things that happened long ago, you know.
Google were maybe seen as the good guys back in the days of Yahoo search, and perhaps the very early days of Android.
But those times are long past. Google has been a tax-avoiding, anti-consumer-rights, search-rigging, anti-privacy behemoth for decades now, and they only get worse with each passing year.
for decades now
You should drop that S. The company has only existed for a little over 2 decades and Android hasn’t been around for much more than 1. Yes they’ve become an evil fucking corporation but let’s not exaggerate for how long.
I've been using Google since 1998, and everyone loved them because their search indexed sites quicker than others and the search results were more useful than the competition at the time, like Yahoo, AltaVista, and Ask Jeeves. They started turning nasty as soon as they gained steam & commercial success with AdWords… around 2003-2004. So no, while they get worse each year, they haven't been 'the good guys' for decades.
In other words, they went public and must now maximize gains for shareholders.
Boards of directors have a fiduciary duty to the shareholders. If they do something they know won't result in maximum short-term profits, they can be found in violation. It's just a race to the bottom.
So I guess for Firefox users it's time to enable the resist fingerprinting option? https://support.mozilla.org/en-US/kb/resist-fingerprinting
It annoys me that this is not on by default…
It’s a nice feature for those that actively enable it and know that it’s enabled, but not for the average user. Most people never change the default settings. Firefox breaking stuff by default would only decrease their market share even further. And this breaks so much stuff. Weird stuff. The average user wants a browser that “just works” and would simply just switch back to Chrome if their favourite website didn’t work as expected after installing Firefox. Chrome can be used by people who don’t even know what a browser is.
I've used this. The only annoyance is that all the on-screen timestamps remain in UTC because JS has no idea what timezone you're in.
I get that TZ provides a piece of the fingerprint puzzle, but damn it feels excessive.
And automatic dark mode isn't respected, plus a lot of other little annoyances. That's why this is so difficult: these are all incredibly useful features we would have to sacrifice for privacy.
Dark mode can be recreated using extensions, although the colors most likely won’t be as legible as “native support”.
I don't see why a similar extension couldn't change the timezones of clocks.
Additionally, I don’t see why the server should bother with either (pragmatically) - Dark mode is just a CSS switch and timezones could be flagged to be “localized” by the browser. No need for extra bandwidth or computing power on the server end, and the overhead would be very low (a few more lines of CSS sent).
Of course, I know why they bother - Ad networks do a lot more than “just” show ads, and most websites also like to gobble any data they can.
Wait, is that why my Firefox is giving me errors when I try to log into websites with 2FA?
Please don't enable this blindly. A lot of modern websites depend on a bunch of features which will simply not work with that flag enabled. Only do it if you're willing to compromise and debug things a bit.
You can also use the CanvasBlocker add-on.
Use the containers feature (the Firefox Multi-Account Containers add-on) and make a Google container so that all Google domains go to that container.
If you want to get crazy, either set these in about:config or make yourself a user.js file in your Firefox profile directory and eliminate all communication with Google. Some other privacy tweaks are included below as well.
google shit and some extra privacy/security settings
Google domains and services:
// Disable Safe Browsing lookups, which otherwise talk to Google's servers:
user_pref("browser.safebrowsing.allowOverride", false);
user_pref("browser.safebrowsing.blockedURIs.enabled", false);
user_pref("browser.safebrowsing.downloads.enabled", false);
user_pref("browser.safebrowsing.downloads.remote.block_dangerous", false);
user_pref("browser.safebrowsing.downloads.remote.block_dangerous_host", false);
user_pref("browser.safebrowsing.downloads.remote.block_potentially_unwanted", false);
user_pref("browser.safebrowsing.downloads.remote.block_uncommon", false);
user_pref("browser.safebrowsing.downloads.remote.enabled", false);
user_pref("browser.safebrowsing.downloads.remote.url", "");
user_pref("browser.safebrowsing.malware.enabled", false);
user_pref("browser.safebrowsing.phishing.enabled", false);
user_pref("browser.safebrowsing.provider.google.advisoryName", "");
user_pref("browser.safebrowsing.provider.google.advisoryURL", "");
user_pref("browser.safebrowsing.provider.google.gethashURL", "");
user_pref("browser.safebrowsing.provider.google.lists", "");
user_pref("browser.safebrowsing.provider.google.reportURL", "");
user_pref("browser.safebrowsing.provider.google.updateURL", "");
user_pref("browser.safebrowsing.provider.google4.advisoryName", "");
user_pref("browser.safebrowsing.provider.google4.advisoryURL", "");
user_pref("browser.safebrowsing.provider.google4.dataSharingURL", "");
user_pref("browser.safebrowsing.provider.google4.gethashURL", "");
user_pref("browser.safebrowsing.provider.google4.lists", "");
user_pref("browser.safebrowsing.provider.google4.pver", "");
user_pref("browser.safebrowsing.provider.google4.reportURL", "");
user_pref("browser.safebrowsing.provider.google4.updateURL", "");

Privacy and security stuff:

// General hardening: push notifications, proxy bypass, TLS behaviour,
// and the resist-fingerprinting options discussed above.
user_pref("dom.push.enabled", false);
user_pref("dom.push.connection.enabled", false);
user_pref("layout.css.visited_links_enabled", false);
user_pref("media.navigator.enabled", false);
user_pref("network.proxy.allow_bypass", false);
user_pref("network.proxy.failover_direct", false);
user_pref("network.http.referer.spoofSource", true);
user_pref("security.ssl.disable_session_identifiers", true);
user_pref("security.ssl.enable_false_start", false);
user_pref("security.ssl.treat_unsafe_negotiation_as_broken", true);
user_pref("security.tls.enable_0rtt_data", false);
user_pref("privacy.partition.network_state.connection_with_proxy", true);
user_pref("privacy.resistFingerprinting", true);
user_pref("privacy.resistFingerprinting.block_mozAddonManager", true);
user_pref("privacy.resistFingerprinting.letterboxing", true);
user_pref("privacy.resistFingerprinting.randomization.daily_reset.enabled", true);
user_pref("privacy.resistFingerprinting.randomization.enabled", true);
user_pref("screenshots.browser.component.enabled", false);
user_pref("privacy.spoof_english", 2);
user_pref("webgl.enable-debug-renderer-info", false);
user_pref("webgl.enable-renderer-query", false);
This is why I like Lemmy, never knew canvas blocker was a thing. Thank you.
Or Mullvad Browser, which is just the Tor Browser without Tor.
There’s also IronFox on Android which is more similar to LibreWolf than MV Browser.
I’m still trying to wrap my head around fingerprinting, so excuse my ignorance. Doesn’t an installed plugin such as Canvas Blocker make you more uniquely identifiable? My reasoning is that very few people have this plugin relatively speaking.
IIRC, websites can't query add-ons unless those add-ons manipulate the DOM in a way that exposes them.
They can query plugins, though.
Add-ons are things installed inside the browser, like uBlock Origin, HTTPS Everywhere, Firefox Containers, etc.
Plugins are installed outside the browser, such as Flash Player, the GNOME extensions installer, etc.
Further: the Canvas API doesn’t have any requirements on rendering accuracy.
By deferring to the GPU, font library, etc, tracking code can generate an image that is in most cases unique to your machine.
So blocking the Canvas API would just return a 0, which is less unique than what it would be normally.
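For what it's worth, canvas-noise add-ons like CanvasBlocker take a different approach: rather than returning a constant, they perturb the read-back so the hash changes every time. A simplified sketch of the idea (not the add-on's actual code):

// Wrap toDataURL so every read-back gets a few randomly flipped low bits,
// invisible to the eye but enough to change the fingerprint hash per call.
const originalToDataURL = HTMLCanvasElement.prototype.toDataURL;
HTMLCanvasElement.prototype.toDataURL = function (...args) {
  const ctx = this.getContext("2d");
  if (ctx) {
    const image = ctx.getImageData(0, 0, this.width, this.height);
    for (let i = 0; i < 16; i++) {
      const idx = Math.floor(Math.random() * image.data.length);
      image.data[idx] ^= 1; // flip the lowest bit of a random byte
    }
    ctx.putImageData(image, 0, 0);
  }
  return originalToDataURL.apply(this, args);
};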
Maybe, if they can connect you to your other usage, but it's probably that such a small % of the population uses it that it isn't worth their resources and time to subvert? Idk, just guessing here.
I use (and love) Firefox containers, and I keep all Google domains in one container. However, I never know what to do about other websites that use Google sign in.
If I’m signing into XYZ website and it uses my Google account to sign in, should I put that website in the Google container? That’s what I’ve been doing, but I don’t know the right answer.
Yes, that's right. Also seriously consider ditching Single Sign-On (more like Single Stalk-On) entirely.
Thank you. I agree re ditching it and have been working on that.
Does uBlock do this?
No
I mean, it doesn't hurt, but as far as I can tell it doesn't actually block fingerprinting; it blocks domains known to collect data and track your activity. And the entire web runs on Google domains, so blocking them all would be nearly impossible.
The crazy part about fingerprinting is that if you block the fingerprint data, they use that block to fingerprint you. That’s why the main strategy is to “blend in”.
So, essentially the best way to actually resist fingerprinting would be to spoof the results to look more common. For example, when I checked amiunique.org, one of the most unique elements was my font list. But for 99% of sites you could spoof a font list that contains only the most common fonts (which you do have), and that would make you "blend in" without harming functionality, barring a handful of specific sites that rely on having a special font, which might need to be set as exceptions.
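As an aside, JavaScript font detection usually works by measuring text width against a generic fallback font, which is also the layer a spoofer would have to lie at. A rough sketch of the detection side:

// Rough sketch of JS font detection: render a probe string with a target font
// plus a monospace fallback, and compare its width to the fallback alone.
function hasFont(fontName) {
  const probe = document.createElement("span");
  probe.textContent = "mmmmmmmmmmlli";
  probe.style.cssText = "position:absolute;visibility:hidden;font-size:72px";
  document.body.appendChild(probe);

  probe.style.fontFamily = "monospace";
  const baseline = probe.offsetWidth;

  probe.style.fontFamily = `'${fontName}', monospace`;
  const withFont = probe.offsetWidth;

  probe.remove();
  return withFont !== baseline; // width changed => font is installed
}

console.log(["Arial", "Comic Sans MS", "Fira Code"].filter(hasFont));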
No, the best way is to randomly vary fingerprinting data, which is exactly what some browsers do.
Font list is just one of a hundred different identifying data points so just changing that alone won’t do much.
I wasn’t suggesting it as “font list and you’re done”. I was using it as an example because it’s one where I’m apparently really unusual.
I would think you’d basically want to spoof all known fingerprinting metrics to be whatever is the most common and doesn’t break compatibility with the actual setup too much. Randomizing them seems way more likely to break a ton of sites, but inconsistently, which seems like a bad solution.
I mean hypothetically you could also set up exceptions for specific sites that need different answers for specific fields, essentially telling the site whatever it wants to hear to work but that’s going to be a lot of ongoing work.
It’s a combination of both.
Privacy Badger anyone?
But does Privacy Badger also act on the canvas APIs & co.?
Why does it do this?
- Math operations in JavaScript may report slightly different values than normal.
PS grateful for this option!
Some math functions have slightly different results depending on architecture and OS, so they fuzz the results a little. Here’s a tor issue discussing the problem: https://gitlab.torproject.org/legacy/trac/-/issues/13018
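To illustrate, this is roughly the kind of probe a fingerprinting script might run; the exact low-order bits of the results can differ by CPU, OS, and math library, which is why a resisting browser fuzzes them (nothing here is hard-coded, the values are whatever your machine produces):

// Collect a few math results whose last digits can vary across platforms;
// a tracker would join these into one string and hash it.
const mathProbe = [
  Math.tan(-1e300),
  Math.sin(1e300),
  Math.exp(1),
  Math.log(1000),
].map(v => v.toString());
console.log(mathProbe.join("|"));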
But one question I've been asking myself: wouldn't I then be fingerprinted as one of the few nerds who activated the resist fingerprinting option?
Yes. But it’s better than being identified as a unique user which is much more likely without it. You can test it yourself on https://amiunique.org/fingerprint
Just use Tor browser if you want to blend in. Some sites will probably not work, and I don’t suggest accessing banks with it, but it works well for regular browsing.
But why would any browser allow access to that metadata so freely? I get that programming languages can find out about the environment they are operating in, but why would a browser agree to something like reading installed fonts or extensions without asking the user first? I understand why Chrome does this, but all of the major ones, and even Firefox?
Because the data used in browser fingerprinting is also used to render pages. Example: a site needs to know the size of browser window to properly fit all design elements.
I fucking hate this. Let me zoom, stop reacting and centering omfg.
Just for an example that isn’t visible to the user: the server needs to know how it can communicate responses to the browser.
So it's not just "what fonts do you have". It also needs to know: what type of image can you render? What type of data compression do you speak? Can I hold this connection open for a few seconds to avoid spending a bunch of time establishing a new one? We all agree that basic text can be represented in 7-bit ASCII, but can you parse something from this millennium? Beyond that, there are all the parameters of the actual connection that lives beneath HTTP: what TLS ciphers do you support? What extensions?
The basic information exposed just to make a request may already be sufficient to significantly track a user.
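If you want to see part of that passive surface yourself, any header-echo service will reflect back what your browser sends with every request; httpbin.org is one public example (assuming it's reachable from your network).

// Ask an echo service to return the request headers it received.
// Typical output includes User-Agent and Accept-Language, i.e. data that
// is sent before any script on the page even runs.
fetch("https://httpbin.org/headers")
  .then(response => response.json())
  .then(data => console.log(data.headers));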
Firefox has built-in tracking protection.
I know that it has that in theory, but my Firefox just scored lower on https://coveryourtracks.eff.org/ (which was posted in this thread, thanks!) than Safari did. Firefox has good tracking protection but had an absolutely unique fingerprint (100% identifiable on the site), whereas Safari scored a bit lower on tracking protection but did not have a unique fingerprint.
Probably because Safari is default macOS and most people leave it at default settings. I doubt Apple is doing anything special here.
Apple is doing good on the privacy browser front because it makes the data they collect more valuable
It would be nice to hammer a manually created fingerprint into the browser and share that fingerprint around. When everyone has the same fingerprint, no one can be uniquely identified. Could we make such a thing possible?
Tor browser
And Mullvad browser
Not really. The “fingerprint” is not one thing, it’s many, e.g. what fonts are installed, what extensions are used, screen size, results of drawing on a canvas, etc… Most of this stuff is also in some way related to the regular operation of a website, so many of these can’t be blocked.
You could maybe spoof all these things, but some websites may stop behaving correctly.
I get that some things like screen resolution and other basic stuff are needed, but most websites don't need to know how much RAM I have, or which CPU I use, and so on. I would wish for an opt-in on these topics: only make the bare minimum available and ask the user when more is needed. For example, for playing games in the browser it could be useful to know how much RAM is available, but for most other things it is not.
Unfortunately the bare minimum is in most cases already enough to uniquely fingerprint you.
This is called Tor
*Tor Browser
Leave everything default and you’ll look like every other Tor browser user.
No it isn’t.
And this is really important. If you go on Google tracked websites without tor, Google will still know it’s you when you use tor, even if you’ve cleared all your cookies.
Tor means people don’t know your IP address. It doesn’t protect against other channels of privacy attack.
Yes, it is… Tor protects against fingerprinting as well. It isn't just relay plumbing to protect your IP… This can easily be tested on any fingerprinting site: the default config of Tor demonstrates low entropy. https://blog.torproject.org/browser-fingerprinting-introduction-and-challenges-ahead/
No, it is not. Tor Browser != Tor. Get your shit right or be pwned.
It's been a long while since I looked, but I remember it being a thing in Tails to specifically not resize your browser window, or to only have it full screen, so you match a ton of other fingerprints.
Plus, since it was a live distro that reset on every reboot, it would only have the same fonts and other data as other people using Tails. Honestly, I hate that all that info is even available to browsers and websites at all.
Letterboxing has significantly reduced threat presented by window sizing. https://support.torproject.org/glossary/letterboxing/
I don’t quite understand – does this feature let you resize the window again to the size you want, and you are still sharing the same fingerprint with everyone else? Or do you still have to keep the browser window the default size to minimize your unique fingerprint?
Tor browser is not Tor.
This is Tor https://en.m.wikipedia.org/wiki/Tor_(network)
Tor Browser is an additional piece of software built on top of it. Using the network (which is what everyone else means when they say Tor) is unfortunately not enough to prevent fingerprinting.
Good point, that difference does matter. I guess other browsers like Brave use the Tor Network, and it would be misleading to suggest Brave has good anti-fingerprinting.
What kind of fingerprint avoidance are you suggesting then that the Tor browser cannot do that makes a difference?
If you enable JavaScript, you open Pandora’s box to fingerprinting (e.g. tracking mouse movements, certain hardware details, etc). If you don’t, half (or more) of the internet is unusable.
PiHole
AdAway
Burn the ads down.
Sadly, neither will truly protect you from fingerprinting.
Like, why not? The article says:
“And this is exactly why Google wants to use digital fingerprinting: It is way more powerful than cookie-based tracking, and it can’t be blocked for instance by switching to a privacy-first browser.”
If I use Firefox and Firefox doesn’t send any fingerprint to the website, then how is it identifying me?
I get that if you use Android (which is normally tied to Google), you're still exposed to it on Google websites, but how will it work otherwise?
This website explains it: https://pixelprivacy.com/resources/browser-fingerprinting/
Basically, you send your user agent, browser and OS configuration (like screen resolution), your primary system language, timezone, installed plugins, and so forth as you browse the internet. Not so easy to block. In fact, avoiding fingerprinting 100% is almost impossible, because there are so many configurations; it is hard not to be somewhat unique. Still, there are ways to minimize the identifying information. If you use Firefox, this is what you might want to read: https://support.mozilla.org/en-US/kb/resist-fingerprinting. Note, though, that even there it says that such techniques can "help prevent websites from uniquely identifying you", not prevent it entirely.
They can block domains known to collect fingerprinting data but yes, they don’t block fingerprinting itself.
When you go to The Verge and there's a full-screen pop-up about how "our 872 partners store and access personal data, like browsing data or unique identifiers", those are all data brokers. And it's not just them; it's a fucking epidemic of sites on the internet that sell user data. The web has a cancer and it's called advertising.
PopUpOff gets rid of the box on most sites without having to give your consent. Can’t remember the last time an annoying cookie disclaimer blocked me from web content.
I wasn’t complaining about annoying cookie banners, I was complaining about data collection.
You can get rid of cookie banners with a normal ad blocker like uBO
Further evidence that a Republican government in the USA results in private organisations pushing the bar as far as they can.
In Reagan’s time it was Wall Street. Now it’s Silicon Valley.
You want private organisations working for your benefit and not that of their shareholders? You need a government that actually has the gumption to challenge them. The current US government is 4 years of a surrender flag flying on the white house.
Or we could bin off this fucking failed neoliberal experiment, but that’s apparently a bit controversial for far too many people
Republicans aren’t the problem here, they’re a natural result of a two party system. If you have a coin, half the time you’ll get the “good” side, and half the time you’ll get the “bad.”
And this isn't to say either side is consistently "good" or "bad"; parties rarely stick to anything. The deregulation you're complaining about started under Jimmy Carter, affectionately called "the great deregulator." In fact, many (most?) of Carter's changes took effect during Reagan's term, and it was incredibly successful.
However, for some reason Democrats are now against deregulation, probably because Republicans took the credit and Democrats needed to rebrand.
That doesn’t imply that Trump’s deregulation is “good,” it just means deregulation isn’t inherently “bad.”
Having the gall to suggest we not allow less than 3000 people to own all of the worlds supply lines, media platforms, institutional wealth, construction companies, dissemination platforms, politicians, private equity firms and the single largest interconnected (private or otherwise) espionage and social engineering plot known to mankind?
You fucking tankie, you! Go back to Russia!!!
Digital fingerprinting is a method of data collection – one that in the past has been refused by Google itself because it “subverts user choice and is wrong.” But, we all remember that Google removed “Don’t be evil” from its Code of Conduct in 2018. Now, the Silicon Valley tech giant has taken the next step by introducing digital fingerprinting.
Oh, forgot to mention - we’re evil now. Ha! Okay, into the chutes.
Google removed “Don’t be evil”
Still parading that lie around? It’s easily verified as false. Their code of conduct ends with:
And remember… don’t be evil, and if you see something that you think isn’t right – speak up!
Still parading that lie around? It was removed and then added back later.
It was removed and then added back later
Really? Because the articles that noticed it back then said it was retained at the end of the document, it was only removed from the preface:
https://www.searchenginejournal.com/google-dont-be-evil/254019/
https://gizmodo.com/google-removes-nearly-all-mentions-of-dont-be-evil-from-1826153393
Time for meshnet?