I think people are misunderstanding. This isn't CT logs; it's a wildcard certificate, so it wouldn't leak the "nas" part. It's Sentry catching client-side traces and calling home with them, then picking the hostname out of the request that sent them (i.e., "nas.nothing-special.whatever.example.com") and trying to poll it for whatever reason. That poll goes to a separate server that catches the wildcard domain and rejects it.
Sounds like a great way to get Sentry to fire off arbitrary requests to IPs you don’t own.
Sure hope nobody does that targeting IPs (like the blacklist in masscan) that will auto-report you to your ISP/ASN/whatever for your abusive traffic. Repeatedly.
> The poster described how she was able to retrieve her car after service just by giving the attendant her last name. Now any normal car owner would be happy about how easy it was to get her car back, but someone with a security mindset immediately thinks: “Can I really get a car just by knowing the last name of someone whose car is being serviced?”
Just a couple of hours ago, I picked my car up from its obligatory annual vehicle check. I walked past it and went into the office, saying "I'm here to pick up my car". "Which one is it?" "The Golf" "Oh, the $MODEL?" (it was the only Golf in their car park) "Yeah". And then, after payment of £30, the keys were handed over without any checks at all, not even a confirmation of my surname. This was a different guy to the one who was in there an hour earlier when I dropped the car off.
I feel like that car security situation is also sort of set up to tell us how folks with a security mindset can go overboard.
Some car dealership that has never had a car stolen hires a consultant, and the consultant identifies this pickup situation as a problem. Then they implement some wild security, and now customers who just dropped off their car and just talked to the same customer service person about the weather ... have to go through extra security to impersonally prove who they are, because someone imagined a problem that has never occurred (or nearly never). But here we go doing the security dance over an imagined problem that really has nothing to do with how people actually steal cars...
Computers and the internet are different, of course; the volume of possibilities and bad actors you could be exposed to is seemingly endless. Yet even there, the security mindset can go overboard.
I'm currently trying to recover/move some developer accounts for some services because we had someone leave the company less than gracefully. Often I have my own account that's part of an organization ... but moving ownership is an arduous and bizarrely different process for each company. I get it, you wouldn't want someone to take over our no-name organization, but the processes all seem to involve extra steps piled on "for security". The fact that I'm already a customer, have an account in good standing, am part of the organization, and the organization account holder has been inactive ... doesn't seem to matter at all. I may as well be a stranger from the outside, presumably because of "security".
It certainly feels that way here in 2026. It seems like I'm spending so much time "verifying" and "authenticating" and clicking somewhere so that the service can send me a code in E-mail. And more and more services are getting super aggressive. Biometrics, 2FA, uploading government ID, uploading face scans... Good grief!
I can imagine being in infosec is a rough life. When the company gets breached, they're blamed. So they spend all their time red-teaming, coming up with outlandish ways their systems could be compromised, and equally outlandish hoops for users to jump through just to use the product. So the product gets all these hoops. And then an attacker gets even more creative, breaches you again, and now your product has horrible UX and you're still getting breached.
The way so-called ‘2FA’ has been implemented on 90% of the things I interact with as a consumer is an absolute farce. Control of a SIM is nearly 100% of the time sufficient to get absolute control of any account, and showing a $50 fake ID to a teenager at a cell phone store probably has a 99% success rate. Only sites for nerds, plus Google and Microsoft, support TOTP or passkeys. Everywhere else uses the SMS BS for 2FA, or often effectively 1FA if it can be used to reset the first factor. And these same idiots lecture you because your 100-character password doesn’t contain “at least one of these six special characters”, an upper, a lower, and a digit. `Password1!` is a perfectly acceptable password to these systems.
On the flip side... I can't tell you how many times I've had to explain how public/private key crypto works to developers and IT security staff working in government projects. And this is just for one-way trust of JWTs for SSO integrations.
I mean, I don't mind if the same dev public-keys are used nearly everywhere in internal dev and testing environments... but JFC, don't deploy them to client infrastructure for our apps.
FWIW, as an aside... for about the last decade, I've generally separated auth from the application I'm working on, relying on a limited set of established roles and RSA-signed JWTs, and allowing one or more issuers to be configured. This allows for a "devauth" you can run locally to sign in as whoever you want, while integrating more easily with other SSO systems and bridging to other auth services/systems in differing production environments. Even with proper SSO/OAuth etc. services, it still mostly comes down to configuration.
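A minimal sketch of that issuer-registry idea, using only the standard library. The commenter describes RSA-signed (RS256) tokens; HS256 is substituted here purely to keep the sketch dependency-free, and the issuer URLs and keys are made up:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical issuer -> key registry: a local "devauth" plus a production IdP.
# With RS256 you'd map each issuer to its public key instead of a shared secret.
ISSUER_KEYS = {
    "https://devauth.localhost": b"dev-only-secret",
    "https://sso.example.com": b"prod-secret-from-config",
}

def b64url_encode(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign(payload: dict, issuer: str) -> str:
    """What the devauth/SSO side does: mint a token bound to an issuer."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps({**payload, "iss": issuer}).encode())
    sig = hmac.new(ISSUER_KEYS[issuer], f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url_encode(sig)}"

def verify(token: str) -> dict:
    """What the app does: look up the key by issuer, check the signature."""
    header, body, sig = token.split(".")
    claims = json.loads(b64url_decode(body))
    key = ISSUER_KEYS[claims["iss"]]  # unknown issuer -> KeyError -> rejected
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("bad signature")
    return claims

token = sign({"sub": "alice", "roles": ["admin"]}, "https://devauth.localhost")
print(verify(token)["sub"])  # alice
```

The app itself never cares which issuer signed the token, only that the issuer is in its configured registry — which is what makes swapping devauth for a real SSO purely a configuration change.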
And then some person realizes that government ids can be faked, so they set up a system of doing a retinal scan of the person dropping off the car and then comparing it to the retinal scan of the person picking it up.
Then they realize that one person may be bribed so they require at least two people to verify at pickup and drop off.
Meanwhile, a car has never ever been stolen this way.
It’s a risk/reward scenario, and an example of security minded people chasing ghosts.
Conmen stealing VW Golfs from repair shops is a really low-likelihood/high-impact event. So they could demand your passport and piss you off, or have you leave a happy customer.
In the remote chance the con artist strikes, it’s a general liability covered by insurance.
The difference is that car theft is still prosecuted by police, whereas cybercrime is not (unless you embarrass a huge corporation).
So the garage can have lower security because even potential thieves do a risk/reward calculation and the vast majority choose not to proceed with it.
Online, the risk/reward calculation is different (what risk?), so more people will be tempted to try (even for the lolz - not every act of cybercrime is done for monetary purposes).
The fact that so many things in the world work like this is the reason for the continued appeal of heist movies. Those always contain clever bits of social engineering and confidence scams which move the plot along - and they are as believable today as they always were.
It's even easier than that. A lot of older ignition locks could be defeated by a screwdriver, so you just smash the window, jimmy the ignition lock with the screwdriver, and off you go! There was a specific model of Jeep that was stolen a lot because the rear lock could be popped out easily with pliers, a matching key made, and you'd return later with the key to steal the car.
You'd have to be stupid and desperate to steal from a garage.
The people who work there aren't office workers; you've got blue collar workers who spend all day working together and hanging out using heavy equipment right in the back. And they're going to be well acquainted with the local tow truck drivers and the local police - so unless you're somewhere like Detroit, you better be on your way across state lines the moment you're out of there. And you're not conning a typical corporate drone who sees 100 faces a day; they'll be able to give a good description.
And then what? You're either stuck filing off VINs and faking a bunch of paperwork, or you have to sell it to a chop shop. The only way it'd plausibly have a decent enough payoff is if you're scouting for unique vehicles with some value (say, a mint condition 3000GT), but that's an even worse proposition for social engineering - people working in a garage are car guys, when someone brings in a cool vehicle everyone's talking about it and the guy who brought it in. Good luck with that :)
Dealership? Even worse proposition, they're actual targets so they know how to track down missing vehicles.
If you really want to steal a car via social engineering, hit a car rental place, give them fake documentation, then drive to a different state to unload it - you still have to fake all the paperwork, and strip anything that identifies it as a rental, and you won't be able to sell to anyone reputable so it'll be a slow process, and you'll need to disguise your appearance differently both times so descriptions don't match later. IOW - if you're doing it right so it has a chance in hell of working, that office job starts to sound a whole lot less tedious.
Stolen cars are often sold for low amounts of money - like $50 - and then used to commit crimes that are not traceable from their plates. It hasn't really been possible to steal and resell a car in the United States for many years, barring a few carefully watched loopholes (Vermont out-of-state registration is one example that was recently closed).
When Kia and Hyundai were recently selling models without real keys or ignition interlocks, that was the main thing folks did when they stole them.
In Canada there's been a big problem with stolen cars lately. Mostly trucks, and other high value vehicles though. Selling them locally isn't feasible, but there's a criminal organization that's gotten very good at getting them on container ships and out to countries that don't care if the vehicles are stolen. So even with tracking, there's nothing people can do. Stopping it at the port is the obvious fix, but somehow that's not what is being done. Probably bribery to look the other way.
Yeah, the only way to do it would be a cash transaction where you'd have to forge a legitimate looking title/registration and pass it off to a naive buyer. So it's still technically possible, but not in any kind of remotely scalable way.
I reckon it is infinitely riskier to be caught attempting to break into a car than to just walk into a service garage and pretend you own the Vdub in the parking lot. There is still a bit of deniability in the second option, but good luck explaining to the police why you're carrying a set of tools specifically for picking vehicle locks (because you can't just use regular picks and tension wrenches) to break into a vehicle you don't own.
> This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves ...
I have to disagree in the strongest terms. It doesn't matter what it is, the only way to do a good job designing something is to imagine the ways in which things could go wrong. You have to poke holes in your own design and then fix them rather than leaving it to the real world to tear your project to shreds after the fact.
The same thing applies to science. Any even half decent scientist is constantly attempting to tear his own theories apart.
I think Schneier is correct about that sort of thinking not being natural for your typical person. But it _is_ natural (or rather a prerequisite) for truly competent engineers and scientists.
Hmmm, I am 50% with you. IMHO, to be an amazing engineer is to see a problem and find a good (whatever "good" means) solution. Being a good scientist is asking precise questions and finding experiments to validate them.
I think it's more the nuanced difference between safety and security. Engineers build things so they run safely.
For example, building a roof that doesn't collapse is building a safe roof.
Is the roof secure? Maybe I can put thermite in the wood...
That is the difference: safety means the thing itself does no harm, and that's what engineers build for; security means protecting the thing from harm from outside.
That is true, but security is similarly subject to the need to constrain threat models to those that are relevant. The scientist doesn't need to worry about mass production, and the engineer (in most cases) doesn't need to worry about someone taking a chainsaw to it.
Security will have a wider scope by default (unlike natural phenomena, attacks are motivated and can get pretty creative after all) but there will still be some boundary outside of which "not my problem" applies. Regardless, it's the same fundamental thought pattern in use. Repeatedly asking "what did I overlook, what unintended assumptions did I make, how could this break".
That said, admittedly by the time you make it to the scale of Google or Microsoft and are seriously considering intelligence agencies as adversaries the sky is the limit. But then the same sort of "every last detail is always your problem" mentality also applies to the engineers and software developers building things that go to space (for example).
Hostnames are not private information. There are too many ways for them to leak to the outside world.
It can be useful to hide a private service behind a URL that isn't easy to guess (less attack surface, because many attackers can't find the service). But the secret needs to be inside the URL path, not the hostname.
In the first case, the name leaks via DNS queries, TLS certificates, and many other channels. In the second case, the secret path is only transmitted inside HTTPS and doesn't leak as easily.
Marginally better, for sure, but in this case the path would also have been "leaked" to the Sentry instance owned by the developers of the phoning-home NAS device. This can happen in a zillion ways and is a good reason to use relatively opaque URLs in general rather than "friendly IDs", and to generally be careful about putting secrets in URLs.
Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and part of what was logged were internal hostnames (which might be named in a way that carries sensitive info, e.g., the corp-and-other-corp-merger example they gave). So it wouldn't matter that it's inaccessible on a private network; the name itself is sensitive information.
In that case, I would personally replace the operating system of the NAS with a free/open-source one that I trust and that does not phone home. I suppose some form of ad blocking à la Pi-hole, or some other DNS configuration that blocks the Sentry calls, would work too, but I would just go with using an operating system I trust.
No it's because lots of stuff is duct taped together and then you have tons of scripts or tooling that was someone's weekend project (to make their oncall burden easier) that they shared around. Usually there'll be a flag like --clowntown or --clowny-xyz when it's obvious to all parties involved that it's destined to destroy everything one day but YOLO (also a common one).
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
You may not owe clown-resemblers better, but you owe this community better if you're participating in it.
We ban accounts that keep posting in this sort of pattern, as yours has, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
I have contributed a lot and yet have a lot to offer this community. I am not doing anything which violates the rules and I am being generous with my interpretation. Remarking on Zuckerberg and other evil people like the pieces of shit they are, is a legitimate and kind way to interact in this community. I know this because I run a hacker collective and it's common knowledge there -- we are all HN users too.
Thank you for your support and encouragement through the many years I have been here.
I remember the term "clown computing" being used on IRC to describe "cloud computing" earlier than 2016.
I use a localhost TLS forward proxy for all TCP and HTTP over the LAN
There is no access to remote DNS, only local DNS. I use stored DNS data periodically gathered in bulk from various sources. As such, HTTP and other traffic over TCP that use hostnames cannot reach hosts on the internet unless I allow it in local DNS or the proxy config
For me, "WebPKI" has proven useful for blocking attempts to phone home. Attempts to phone home that try to use TLS will fail
I also like adding a CSP response header that effectively blocks certain Javascript
It sounds like the blog author gave the NAS direct access to the internet
Every user is different, not everyone has the same preferences
> It sounds like the blog author gave the NAS direct access to the internet
FTFA:
Every time you load up the NAS [in your browser], you get some clown GCP host knocking on your door, presenting a SNI hostname of that thing you buried deep inside your infrastructure. Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
This is when you fire up Little Snitch, block the whole domain for any app on the machine, and go on with life.
I disagree with your conclusion. The post speaks specifically about interactions with the NAS through a browser being the source of the problem and the use of an OSX application firewall program called Little Snitch to resolve the problem. [0] The author's ~fifteen years of posts demonstrate that she is a significantly accomplished and knowledgeable system administrator who has configured and debugged much trickier things than what's described in the article.
It's not impossible that the source of the problem has been misidentified... but it's extremely unlikely. Having said that, one thing I do find likely is that the NAS in question is isolated from the Internet; that's just a smart thing that a savvy sysadmin would do.
[0] I find it... unlikely that the NAS in question is running OSX, so Little Snitch is almost certainly running on a client PC, rather than the NAS.
> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
The term has been in use for quite some time; it is voicing sarcastic discontent with the hyperscaler platforms _and_ their users (the idea being that the platform is "someone else's computer" or - more up to date - "a landlord for your data"). I'm not sure if she coined it, but if she did then good on her!
Not everyone believes using "the cloud" is a good idea, and for those of us who have run their own infrastructure "on-premises" or co-located, the clown is considered suitably patronising. Just saying ;)
> the idea being that the platform is "someone else's computer"
I have a vague memory of once having a userscript or browser extension that replaced every instance of the word "cloud" with "other peoples' computers". (iirc while funny, it was not practical, and I removed it).
fwiw I agree and I do not believe using "the cloud" for everything is a good idea either, I've just never heard of the word "clown" being used in this way before now.
I remember ridiculing "cloud computing" by calling it "clown computing" decades ago. It's pretty old and well established snark-jargon, like spelling Micro$oft with a dollar sign.
Stuff like this is why I consider uBlock Origin to be the bare minimum security software for going on the web. The amount of 3rd party scripts running on most pages, constantly leaking data to everybody listening, is just mind boggling.
It's treating a symptom rather than a disease, but what else can we do?
I also have taken to using AdGuard Home on the router. It blocks 15 or 20 percent of all my traffic. It's quite scary how bad the tracking and other nasties have become.
The only way I can think of to protect against this is to put a reverse proxy in front of it, like Nginx, and inject CSP headers to prevent cross-site requests. It wouldn't block the NAS server side from making external calls, but it would prevent your browser from doing it for them, as is the case here. It would also block stuff like Google Analytics if they have it. If you set up a proxy, you could also give it a local hostname like nas.local or something with a cert signed by your private CA that Nginx knows about, and then point the real hostname at Nginx, which has the wildcard cert.
It's a bit of a pain to set all this up, though. I run a number of services on my home network and I always stick Nginx in front with a restrictive CSP, then open that policy up as needed. For example, I'm running Home Assistant with the Steam plugin, which I assume is responsible for browser requests like https://avatars.steamstatic.com/HASH_medium.jpg, which are being blocked by my injected CSP.
P.S. I might decide to let that Steam request through so I can see avatars in the UI. I also inject "Referrer-Policy: no-referrer", so if I do decide to do that, at least they won't see my HA hostname in their logs by default.
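A hedged sketch of the Nginx arrangement described above (the hostnames, cert paths, and upstream address are all placeholders):

```nginx
server {
    listen 443 ssl;
    server_name nas.example.com;

    ssl_certificate     /etc/nginx/certs/wildcard.pem;
    ssl_certificate_key /etc/nginx/certs/wildcard.key;

    # Only allow the page to talk to its own origin; add exceptions
    # (e.g. avatars.steamstatic.com) to the policy as needed.
    add_header Content-Security-Policy "default-src 'self'" always;
    add_header Referrer-Policy "no-referrer" always;

    location / {
        proxy_pass https://192.168.1.10:5001;  # the NAS itself
        proxy_set_header Host $host;
    }
}
```

The `always` flag matters here: without it, Nginx only adds the headers to 2xx/3xx responses, so error pages would escape the policy.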
I bought a Synology NAS and I have regretted it 3-4 times already. Apart from the software made available by the community, there is very little one can do with this thing.
Using LE to apply SSL to services? Complicated. Non-standard paths, a custom distro, everything hidden (you can't figure out where to place the SSL cert or how to restart the service, etc.). Of course you'll figure it out if you spend 50 hours… but why?
Don’t get me started with the old rsync version, lack of midnight commander and/or other utils.
I should have gone with something that runs proper Linux or BSD.
Unless you know what you are walking into ahead of time I would not recommend Synology to someone who wants to host a bunch of stuff and also wants a NAS. I don’t touch any of the container/apps stuff on my Synology(s), they are simply file servers for my application server. For this purpose, I find Synology rock solid and I’ve been very happy with them.
That said, I’ll probably try out the UniFi NAS offerings in the near future. I believe Synology has semi-walked-back its draconian hard drive policy but I don’t trust them to not try that again later. And because I only use my Synology as a NAS I can switch to something else relatively easily, as long as I can mount it on my app server, I’m golden.
NAS is the primary function. But yes, I want a full Linux server where I can decide what to install and which protocol to use to upload and/or download files.
I bought Synology RS217 for $100 last year and it's the best tech purchase I made in years. The software it comes with is the best web interface I experienced in years. The simplicity, stability and attention to detail reminds me of old macs. I have macmini as application server and did not expect to use Synology for anything but file storage / replication. However it comes with a great torrent client that I use all the time now. We also use Synology Office instead of google docs now. It exceeded all my expectations and when it dies, I will immediately buy one of the new rack stations they offer.
You can run a container on Synology and install your custom services and tools there. At least, that is what I do. For custom kernel modules you still need a Synology package, for something like WireGuard.
If you have OPNSense, it has an ACME plugin with Synology action. I use that to automatically renew and push a cert to the NAS.
That said, since I like to tinker, Synology feels a bit restricted, indeed. Although there is some value in a stable core system (like these immutable distros from Fedora Atomic).
I have a fairly recent DS920+ and never had issues with containers - I have probably 10+ containers on it - grafana, victoriametrics/logs, jellyfin, immich with ML, my custom ubuntu toolboxes for net, media, ffmpeg builds, gluetun for vpn, homeassistant, wallabag,...
Edit: I just checked Grafana and cadvisor reports 23 containers.
Edit2: 4.4.302+ (2022) is my kernel version, there might be specific tools that require more recent kernels, of course, but I was so far lucky enough to not run into those.
> there is very little one can do with this thing.
It has a VMM and Docker. Entware / opkg exist for it. There's very little that can't be done, but expecting to use an appliance that happens to be Linux-based as a generic Linux server is going to lead to challenges. Be it Synology, TrueNAS, or anything else.
I personally have been blocking Sentry and all related domains on my machines.
I understand this is not generally applicable advice. For me, it's the right choice.
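For what it's worth, a sinkhole along these lines can be as simple as a hosts-file or dnsmasq entry. (The domain list here is illustrative; Sentry also uses per-organization ingest hostnames under sentry.io, so a wildcard-capable blocker catches more.)

```
# /etc/hosts on each machine (exact hostnames only):
0.0.0.0 sentry.io

# Or network-wide with dnsmasq / Pi-hole, which also covers
# subdomains such as the per-org ingest hosts:
address=/sentry.io/0.0.0.0
```

The trade-off mentioned elsewhere in the thread still applies: apps that tunnel Sentry traffic through their own first-party domain sail right past DNS-level blocks like these.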
Having recently set up sentry, at least one of the ways they use this is to auto-configure uptime monitoring.
Once they know what hosts you run, they'll ping that hostname periodically. If it stays up and stable for a couple of days, you'll get an in-product alert: "Set up uptime monitoring on <hostname>?"
Whether you think this is valid, useful, acceptable, etc. is left as an exercise to the reader.
Reverse address lookup servers routinely see escaped attempts to resolve ULA and RFC 1918 addresses. If you can tie the resolver to other valid data, you learn inside state.
Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you can tie that to other corroborated data, same thing: you see packets from "inside" all the time.
Darknet collection during the final /8 run-down captured audio in UDP.
I have investigated a similar situation on Heroku. Heroku assigns a random subdomain suffix to each new app, so app URLs are hard to guess and look like this: test-app-28a8490db018.herokuapp.com.
I noticed that as soon as a new Heroku app is created, without making any requests to the app that could leak the URL via a DNS lookup, the app is hit by requests from automatic vulnerability-scanning tools. Heroku confirmed that this is due to the new app URL being published in certificate transparency logs, which are actively monitored by vulnerability scanners.
> certificate transparency logs, which are actively monitored by vulnerability scanners
That sounds like a large kick-me sign taped to every new service. Reading how certificate transparency (CT) works leads me to think that there was a missed opportunity to publish hashes to the logs instead of the actual certificate data. That way a browser performing a certificate check can verify in CT, but a spammer can't monitor CT for new domains.
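As a toy model of that suggestion (explicitly not how CT actually works): publish only the SHA-256 digests of hostnames, so a browser that already knows the name can verify inclusion, while a scanner reading the log gets nothing directly enumerable. One honest caveat, noted in the comments: low-entropy names could still be brute-forced from the hashes.

```python
import hashlib

# Hypothetical hash-based transparency log: stores digests, not names.
log = set()

def log_cert(hostname: str) -> None:
    """What a CA would publish: a digest instead of the hostname."""
    log.add(hashlib.sha256(hostname.encode()).hexdigest())

def browser_check(hostname: str) -> bool:
    """A browser that already knows the hostname can verify inclusion."""
    return hashlib.sha256(hostname.encode()).hexdigest() in log

log_cert("test-app-28a8490db018.herokuapp.com")
print(browser_check("test-app-28a8490db018.herokuapp.com"))  # True
print(browser_check("some-other-app.herokuapp.com"))         # False
```

With Heroku-style random suffixes the digest preimage is effectively unguessable; with names like "www.example.com" a dictionary attack over the hashes recovers them trivially, which is one reason real CT publishes the certificates themselves.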
This applies only to Heroku Fir and Cedar apps (apps that run in Heroku Private Spaces). Heroku Common Runtime apps still use shared wildcard certificate and their domains are not discoverable like this.
Isn't the article overemphasizing the leakage of internal URLs a little bit?
Internal hostnames leaking is real, but in practice it’s just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools etc etc.
DNS naming rules for non-Unicode names are letters, numbers, and hyphens only, and a hyphen can't start or end a label. Unicode is implemented on top of that through punycode. It's possible a series of bugs would let you smuggle some sort of injection character through via punycode, but it would require a chain of faulty software. Not an impossibly long chain of faulty software by any means, but a chain rather than a single vulnerability. Punycode encoders are supposed to leave ASCII characters as ASCII characters, which means ASCII characters illegal in DNS can't be made legal by punycoding them legally. I checked the spec and I don't see anything about a decoder rejecting something that jams one in, but I also can't tell whether it's even possible to encode a normal ASCII character; it's a very complicated spec. Things that receive such a domain ought to reject it, if it is possible to encode it. And then it still has to end up somewhere vulnerable after that.
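The pass-through behaviour described above is easy to see with Python's built-in IDNA codec (hostnames here are made up): non-ASCII labels get punycode-encoded, while ASCII rides through unchanged, so the encoding step can't launder a character that plain DNS rules already forbid.

```python
# Non-ASCII label -> punycode:
print("bücher.example".encode("idna"))    # b'xn--bcher-kva.example'

# Pure ASCII passes through untouched:
print("example.com".encode("idna"))       # b'example.com'

# Note the stdlib encoder doesn't police ASCII legality itself: an
# underscore rides through verbatim, and rejection is left to whatever
# consumes the name downstream.
print("bad_host.example".encode("idna"))  # b'bad_host.example'
```

That last case is the commenter's point in miniature: the encoder keeps ASCII as ASCII rather than validating it, so any filtering of illegal characters has to happen in the software that receives the name.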
Rules are just rules. You can put things in a domain name which don't work as hostnames. Really the only place this is enforced by policy is at the public registrar level. Only place I've run into it at the code level is in a SCADA platform blocking a CNAME record (which followed "legal" hostname rules) pointing to something which didn't. The platform uses jython / python2 as its scripting layer; it's java; it's a special real-time java: plenty of places to look for what goes wrong, I didn't bother.
People should know that they should treat the contents of their logs as unsanitized data... right? A decade ago I actually looked at this in the context of a (commercial) passive DNS, and it appeared that most of the stuff which wasn't a "valid" hostname was filtered before it went to the customers.
This is exactly why I have a number of "appliances" which never get clown updates: they have addresses in a subnet I block at the segment edge, their DNS queries never get answers, and there are a few entries in the "DNS firewall" [0] (RPZ) which mostly serve as canaries.
This is the problem with the notion that "in the name of securitah IoT devices should phone home for updates": nobody said "...and map my network in the name of security"
[0] Don't confuse this with Rachel's honeypot wildcarding *.nothing-special.whatever.example.com for external use.
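For anyone curious what such a "DNS firewall" looks like, here's a hedged BIND sketch (zone and hostnames are placeholders): in RPZ, a `CNAME .` rule rewrites the answer to NXDOMAIN, and logging queries against the zone gives you the canary effect.

```
// named.conf: point BIND at a local response-policy zone
options {
    response-policy { zone "rpz.local"; };
};
```

```
; rpz.local zone file
$TTL 60
@                         IN SOA  localhost. admin.localhost. (1 3600 600 86400 60)
                          IN NS   localhost.
telemetry.vendor.example  IN CNAME .   ; answered as NXDOMAIN
*.bad-cdn.example         IN CNAME .   ; wildcard covers the whole subtree
```

Any appliance that queries one of these names gets a clean NXDOMAIN instead of a timeout, and the hit shows up in the resolver's logs — which is what makes the entries useful as canaries rather than just blocks.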
From what I understand, sentry.io is like a tracing and logging service, used by many organizations.
This helps you (=NAS developer) to centralize logs and trace a request through all your application layers (client->server->db and back), so you can identify performance bottlenecks and measure usage patterns.
This is what you can find behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable/consent.
For a WebUI it is implemented via javascript, which runs on the client's machine and hooks into the clicks, API calls and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees javascript, so don't blame them. Privacy Badger might block it.
It is as nefarious as the developer of the application wants to use it. Normally you would use it to centralize logging, find performance issues, and get a basic idea on what features users actually use, so you can debug more easily. But you can also use it to track users.
And don't forget, sentry.io is a cloud solution. If you post it on machines outside your control, expect it to be public. Sentry has a self-hosted solution, btw.
My employer uses Sentry for (backend) metrics collection so I had to unblock it to do my job. I wish Sentry would have separate infra for "operating on data collected by Sentry" and "submit every mouse click to Sentry" so I could block their mass surveillance and still do my job, but I suppose that would cut into their profit margins.
My current solution is a massive hack that breaks down every now and then.
Most organizations I've set Sentry up for tunnel the traffic through their own domain, since many blocking extensions block Sentry requests by default. Sentry's own docs recommend it as well. All that to say: it's not trivial to fully block it, and you were probably sending telemetry anyway even with the domain blocked.
With the right tricks (CNAME detection, URL matching) a bunch of ad-blocking tools still pick up the first-party proxies, but that only works when they're directly communicating with the Sentry servers.
Quite a pain that companies refuse to take no for an answer :/
Oh god this sucks, I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.
The (somewhat affordable) productized NASes all suffer from big tech diseases.
I think a lot of people underestimate how easy a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.
If you want the fancy docker management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell, the best middle point between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS, where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.
Actually I host everything on a Linux PC/server, but a different box runs pfSense and a local DNS resolver, so I was talking about setting up split-brain DNS there so I don't have to manually edit the hosts file on every machine and keep it up to date with IP changes. Personally I really like docker compose, it's made running the little homeserver very easy.
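For the split-brain piece, here is a minimal sketch of what that can look like in Unbound (the resolver pfSense ships by default); zone names and addresses are placeholders:

```
server:
    # Answer authoritatively for the internal zone; never forward it upstream
    local-zone: "home.example.com." static
    local-data: "nas.home.example.com. IN A 192.168.1.20"
    local-data: "plex.home.example.com. IN A 192.168.1.20"
```

Clients pointed at this resolver get the internal addresses, while queries leaving the network never see those names.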
Personally, I've started just using mDNS/Bonjour for local devices. Comes preinstalled on most devices (may need a manual package on BSD/Linux servers) and doesn't require any configuration. Just type in devicename.local and let the network do the rest. You can even broadcast additional device names for different services, so you don't need to do plex.nas.local, but can just announce plex.local and nas.local from the same machine.
There's a theoretical risk of MitM attacks for devices reachable over self-signed certificates, but if someone breaks into my (W)LAN, I'm going to assume I'm screwed anyway.
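On a Linux server running avahi-daemon, the "extra names from the same machine" trick can be done with a plain hosts file; the address below is a placeholder:

```
# /etc/avahi/hosts -- additional mDNS names resolving to the same box
192.168.1.20 nas.local
192.168.1.20 plex.local
```

avahi-daemon picks this file up and answers mDNS queries for both names, no per-client configuration needed.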
I've used split-horizon DNS for a couple of years but it kept breaking in annoying ways. My current setup (involving the pihole web UI because I was sick of maintaining BIND files) still breaks DNSSEC for my domain and I try to avoid it when I can.
A bunch of out-of-the-box NAS manufacturers provide a web-based OS-like shell with file managers, document editors, as well as an "app store" for containers and services.
I see the traditional "RAID with a SMB share" NAS devices less and less in stores.
If only storage target mode[1] had some form of authentication, it'd make setting up a barebones NAS an absolute breeze.
Storage target mode is block-level, not filesystem-level, meaning it won't support concurrent access and any network hiccup or dropped connection will leave the filesystem in an unclean state.
> ...any network hiccup or dropped connection will leave the filesystem in an unclean state.
Given that the docs claim that this is an implementation of an official NVMe thing, I'd be very surprised if it had absolutely no facility for recovering from intermittent network failure. "The network is unreliable" [0] is axiom #1 for anyone who's building something that needs to go over a network.
If what you report is true, then is the suckage because of SystemD's poor implementation, or because the thing it's implementing is totally defective?
[0] Yes, datacenter (and even home) networks can be very reliable. They cannot be 100% reliable and -in my professional experience- are substantially less than 100% reliable. "Your disks get turbofucked if the network ever so much as burps" is unacceptable for something you expect people to actually use for real.
The real trick, and the reason I don't build my own NAS, is standby power usage. How much wattage will a self built Linux box draw when it's not being used? It's not easy to figure out, and it's not easy to build a NAS optimized for this.
Whereas Synology or other NAS manufacturers can tell me these numbers exactly and people have reviewed the hardware and tested it.
To me, it's a question of time and money efficiency. (Time is money.)
I can buy a NAS, whereby I pay money to enjoy someone else's previous work of figuring it out. I pay for this over and over again as my needs change and/or upgrades happen.
Or
I can build a NAS, whereby I spend time to figure it out myself. The gained knowledge that I retain in my notes and my tiny little pea brain gets to be used over and over again as needs change, and/or upgrades happen. And -- sometimes -- I even get paid to use this knowledge.
There are power meters like KWS-303L that will tell you how much manufacturers lie with their numbers.
For example, my ancient TP-Link TL-WR842N router eats 15W whether it's on standby or not, while my main box, with fans, backlight, GPU, HDDs and the rest, idles at about 80W.
Looking at Synology's site, the only power figure I see there is the PSU rating, which is 90W for the DS425. So you can expect real power consumption of about 30-40W, which is typical for just about any NUC or a budget ATX motherboard with a low-tier AMD something plus a bunch of HDDs.
However, domains and host names were not designed to be particularly private and should not be considered secret; many things don't treat them as private. So you should not put anything sensitive in a host name, even in a supposedly private network, unless that network is completely air-gapped.
Now, I wouldn't be surprised if hostnames were in fact originally expected to be explicitly public.
You don't need any auth to send an email from your domain, or in fact from any domain. Just set whatever `From` you want.
I've received many emails from `root@localhost` over the years.
Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.
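A quick illustration with Python's standard library: the `From` line is just another header you set yourself (the addresses are of course made up, and actually submitting the message would go through `smtplib`):

```python
from email.message import EmailMessage

# Nothing about the message format authenticates the sender;
# SPF/DKIM/DMARC checks happen, if at all, on the receiving side.
msg = EmailMessage()
msg["From"] = "root@localhost"       # any string you like, no auth involved
msg["To"] = "someone@example.com"
msg["Subject"] = "hello"
msg.set_content("The From header is just a header.")

print(msg["From"])  # root@localhost
```

Whether any receiving server accepts or trusts it is a different matter, but the protocol itself demands nothing.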
> Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.
Source? I've never seen that. Nobody could use their email provider of choice if that was the case.
They don't do DPI, they just look at the destination port.
And that's why there's a separate submission port for handing mail to mail agents, where such auth is expected, and which is typically the only place outbound mail is even submitted to.
Technically local delivery mail too, e.g. where the From and the To headers are valid and have the same domain.
AT&T says "port 25 may be blocked from customers with dynamically-assigned Internet Protocol addresses", which is the majority of customers https://about.att.com/sites/broadband/network
What ISP are you using that isn't blocking port 25, and have you never had the misfortune of being stuck with comcast or AT&T as your only option?
I opened it on an old computer with an old linux distro with an old browser because old linux distros have reliable and working accessibility features like screen readers and good non-gpu text to speech and advanced keyboard/mouse sharing. Modern linux distros do not. Don't worry, I have javascript execution/etc turned off by default on that machine.
Not sure why they made the connection to sentry.io and not to CT logs. My first thought was that "*.some-subdomain." got added to the CT logs and someone is scanning *. with well-known hosts, of which "nas" would be one. Curious if they have more insight into the sentry.io leak and where it leaks to...
But she mentioned: 1) it isn't in DNS, only /etc/hosts, and 2) they are making a connection to it. So they'd need to get the IP address to connect to from somewhere as well.
> You're able to see this because you set up a wildcard DNS entry for the whole ".nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.
They don't need the IP address itself, it sounds like they're not even connecting to the same host.
Unless she hosts her own cert authority or is using a self-signed cert, the wildcard cert she mentions is visible to the public on sites such as https://crt.sh/.
Because sentry.io is a commercial application monitoring tool which has zero incentive to do any kind of application monitoring for non-paying customers. That's just cost without benefit.
You'd have to argue that a random third party is using, and therefore paying, sentry.io to monitor random subdomains for the dubious benefit of knowing that the domain exists, while paying for a tool that is way more expensive than that information is worth.
It's far more likely that the NAS vendor integrated sentry.io into the web interface and sentry.io is simply trying to communicate with monitoring endpoints that are part of said integration.
From the perspective of the NAS vendor, the benefits of analytics are obvious. Since there is no central NAS server where all the logs are gathered, they would have to ask users to send the error logs manually which is unreliable. Instead of waiting for users to report errors, the NAS vendor decided to be proactive and send error logs to a central service.
This is actually a really interesting way to attack a sensitive network: it lets you map the internal layout. Getting access is obviously the main challenge, but once you're in, you need to know where to go and what to look for. If you already have that knowledge when planning the attack to gain entry, you've got the upper hand. So while it kinda seems like "OK, so they have a hostname they can't access, why do I care?", if you're doing high-end security at the sysadmin level, this is the sort of small nitpicking it takes to be the best.
>Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
So, no one competent is going to do this. Domain names are not encrypted by HTTPS (they're visible in the SNI), so any sensitive info gets pushed into the URL path, which is.
I think being controlling of domain names is a sign of a good sysadmin, it's also a bit schizophrenic, but you gotta be a little schizophrenic to be the type of sysadmin that never gets hacked.
That said, domains not leaking is one of those "clean sheet" features that you pursue for no practical reason at all; it feels nice, but if you don't get it, it's not consequential. It's like driving at exactly 50mph, or keeping a green streak on GitHub. You are never going to rely on that secrecy, if only because some ISP might see it, but it's 100% achievable that no one will start pinging your internal host and polluting your hosts (if you do domain name filtering).
So what I'm saying is, I appreciate this type of effort, but it's a bit dramatic. Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
Obl. nitpick: you mean paranoia, presumably. Schizophrenia is a dissociative/psychotic disorder, paranoia is the irrational belief that you’re being persecuted/watched/etc.
Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
>Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
Yes, but I mean being overly cautious in the threat model. For example, birds may be watching through my window, it's true and I might catch a bird watching my house, but it's paranoid in the sense that it's too tight of a threat model.
This too is not ideal. It gets saved in the browser history, and if the URL is sent by message (email or IM), the provider may visit it.
> Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
We are used to the tracking being everywhere but it is scandalous and should be considered as such. Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
>This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.
Sure. POST for extra security.
> Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
If this were a completely local product, like say a USB stick. Sure. but this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, http), it's not the same category of issue.
> Sure. but this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, http), it's not the same category of issue.
Is it fair to say that you're saying it should be considered normal for network-attached devices (designed and sold by reliable, aboveboard companies), connected to (V)LANs with no Internet access, to use the computers that access their management interfaces (whether GUI, CLI, or API) as "jumpboxes" to phone home with information about their configuration and other such "telemetry"?
Do carefully note what I'm asking: whether it should be considered normal to do this, rather than considering it to be somewhat outrageous. It's obviously possible to do this in the same way that it's obviously possible to do things like scratch the paint on a line of cars parked on the street, or adulterate food and medicine.
I've blown fairly competent colleagues' minds multiple times by showing them the existence of certificate transparency logs. They were very much under the impression that hostnames can be kept secret as a protection against external infrastructure mapping.
Otherwise, if you are getting a domain-specific certificate, you are obviously giving your cert provider the domains, so why would you assume they would stay secret?
This highlights a huge problem with LetsEncrypt and CT logs. Which is that the Internet is a bad place, with bad people looking to take advantage of you. If you use LetsEncrypt for ssl certs (which you should), that hostname gets published to the world, and that server immediately gets pummeled by requests for all sorts of fresh install pages, like wp-admin or phpmyadmin, from attackers.
Unsecured fresh install states that rely on you signing in before an attacker does were always a horrible idea. It's been a welcome change on the Linux side where Linux distros can install with your SSH key and details preloaded so password login is always disabled.
These PHP apps need to change so you first boot the app with credentials so the app is secured at all moments.
It's not just Let's Encrypt, right? CT is a requirement for all Certificate Authorities nowadays. You can just look at the certificate of www.google.com and see that it has been published to two CT logs (Google's and Sectigo's)
Now I get why they want to reduce certificate validity to 20 minutes. The logs will become so spammy then that the bad guys won't be able to scan all hosts in them any more...
Technically logging certificates is not a Requirement of the trust stores, but most web browsers won't accept a certificate which isn't presented with a proof of logging, typically (but not always) baked inside the certificates.
The reason for this distinction is that failing to meet a Requirement for issued certificates would mean the trust stores might remove your CA, but several CAs today do issue unlogged certificates - and if you wanted to use those on a web server you would need to go log them and staple the proofs to your certs in the server configuration.
Most of the rules (the "Baseline Requirements" or BRs) are requirements and must be followed for all issued certificates, but the rule about logging deliberately doesn't work that way. The BRs do require that a CA can show us - if asked - everything about the certificates they issued, and these days for most CAs that's easiest accomplished by just providing links to the logs e.g. via crt.sh -- but that requirement could also be fulfilled by handing over a PDF or an Excel sheet or something.
Why would you care that your hostname on a local-only domain is published to the world if it is not reachable from outside? Publicly available hosts are already published to the world anyway through DNS.
No, it's made by systems made by people, systems which might have grown and mutated so many times that the original purpose and ethics might be unrecognizable to the system designers. This can be decades in the case of tech like SMTP, HTTP, JS, but now it can be days in the era of Moltbots and vibecoding.
I like only getting *.domain for this reason. No expectation of hiding the domain but if they want to figure out where other things are hosted they'll have to guess.
That’s really not a great fix. If those hostnames leak, they leak forever. I’d be surprised if AV solutions and/or Windows aren’t logging these things.
Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).
CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.
So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world, and it doesn't matter where you got your certificate, you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.
Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.
So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.
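A toy illustration of why the wildcard hides the label: CT logs only publish the SAN entry (e.g. `*.example.com`), and matching it to a concrete hostname happens client-side. This is a deliberately simplified one-label check, not the full matching rules from RFC 6125:

```python
def wildcard_matches(san: str, hostname: str) -> bool:
    """Simplified one-label wildcard check, ignoring RFC 6125 edge cases."""
    if san.startswith("*."):
        rest = hostname.split(".", 1)
        return len(rest) == 2 and rest[1] == san[2:]
    return san == hostname

# The logs see only "*.example.com"; the labels it covers never appear there.
print(wildcard_matches("*.example.com", "nas.example.com"))  # True
print(wildcard_matches("*.example.com", "a.b.example.com"))  # False: one label only
```

Note the second case: a wildcard covers exactly one label, which is why deeper internal hierarchies need their own (also unlisted) wildcard certs.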
Another big way you get scooped up, having worked in that industry among other things - is that anybody - internal staff, customers, that one sales guy who insists on using his personal iPhone to demo the product and everybody turns a blind eye because he made $14M in sales last year - calls some public DNS resolver and the public DNS server sells those names --- even though the name didn't "work" because it wasn't public.
They don't sell who asked because that's a regulatory nightmare they don't want, but they sell the list of names because it's valuable.
You might buy this because you're a bad guy (reputable sellers won't sell to you but that's easy to circumvent), because you're a more-or-less legit outfit looking for problems you can sell back to the person who has the problem, or even just for market research. Yes, some customers who own example.com and are using ZQF brand HR software won't name the server zqf.example.com but a lot of them will and so you can measure that.
Statistically, the amount of parasite scanning on LE-"secured" domains is way higher compared to purchased certificates. And yes, this is without voluntary publishing on LE's side.
I am not entirely sure what LE does differently, but we made very clear observations about this in the past.
Clueless lol. This is not about any of that. I run Plex on my local network at plex.domain.com. Plex sends logs to the internet with its local domain in the string. Leak. There is no easy way to solve this without deeply inspecting each packet a service sends outside your network, and even that doesn't work when services use SSL certificates and certificate pinning preventing MITMs.
You sound so confident about this and yet you're listing a bunch of useless advice that doesn't work, because the analytics are integrated into the web interface and therefore executed inside the web browser. To guard against that, you'd have to block all outbound connections on your laptop and all other devices that could potentially access the web interface.
It's great to be clueless, that's how you learn! Just don't flex and demean other people with lines like "Coming from someone who worked at FAANG, this is a subpar post" if you're clueless. Again, everything you've said either doesn't really apply here or is impractical.
Blocking dns leaks from the local network will not prevent sentry from sending them to the cloud. Blocking sentry from reaching the cloud (like said in the post) will.
> Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
Unless you actively block all potential trackers (good luck with that one lol), you're not going to prevent leaks if the web UI contains code that actively submits details like hostnames over an encrypted channel.
I suppose it's a good thing you only wasted 30 seconds on this.
Wow, just skip the "bad post", "took me 30 seconds", "Basic stuff" parts already, especially when you are completely missing the point and don't seem to realize it even after several people point it out.
Show some humility.
What's more, one doesn't really read Rachel for her potential technical solutions but because one likes her storytelling.
Haha, this obtuse way of speech is such a classic FAANG move. I wonder if it’s because of internal corporate style comms. Patio11 also talks like this. Maybe because Stripe is pretty much a private FAANG.
Fancy web interfaces are a road to hell. Do the simplest thing that works: plain Apache or nginx with WebDAV and basic auth (proven code, minimal attack surface). Maybe a firewall with hashlimit on new connections; I have it set to 2/minute, which is actually fine for a browser, while moronic bots make a new connection for every request. When they improve, there's always fail2ban.
With that setup, it doesn't bother me that the NAS server, hostname included, is public.
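The plain nginx + WebDAV + basic-auth setup described above can be sketched roughly like this, assuming nginx is built with ngx_http_dav_module (paths, names, and TLS details are placeholders; a hardened deployment needs more than this):

```nginx
server {
    listen 443 ssl;
    server_name nas.example.com;
    ssl_certificate     /etc/nginx/fullchain.pem;
    ssl_certificate_key /etc/nginx/privkey.pem;

    location /dav/ {
        root /srv/storage;
        # WebDAV verbs on top of plain GET/PUT
        dav_methods PUT DELETE MKCOL COPY MOVE;
        # Boring, proven auth with a small attack surface
        auth_basic           "storage";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
```

The 2/minute connection limit maps onto netfilter's hashlimit match (e.g. `-m hashlimit --hashlimit-above 2/minute --hashlimit-mode srcip` on NEW connections), which punishes one-connection-per-request bots while barely affecting a browser's connection reuse.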
My first thought was perhaps they're trying to fetch a favicon for rendering against the traces in the UI?
They're likely trying to retrieve source maps
Obligatory Bruce Schneier: https://www.schneier.com/blog/archives/2008/03/the_security_...
Hehe, just reading that.
Computers and the internet are different, of course; the volume of possibilities and bad actors you could be exposed to is seemingly endless. Yet even there, the security mindset can go overboard.
I'm currently trying to recover/move some developer accounts for some services because we had someone leave the company less than gracefully. Often I have my own account and it's part of an organization... but moving ownership is an arduous and bizarrely different process for each company. I get it, you wouldn't want someone to take over our no-name organization, but the processes all seem to involve extra steps piled on "for security". The fact that I'm already a customer, have an account in good standing, am part of the organization, and the organization account holder has been inactive... doesn't seem to matter at all; I may as well be a stranger from the outside, presumably because of "security".
It certainly feels that way here in 2026. It seems like I'm spending so much time "verifying" and "authenticating" and clicking somewhere so that the service can send me a code in E-mail. And more and more services are getting super aggressive. Biometrics, 2FA, uploading government ID, uploading face scans... Good grief!
I can imagine being in info-sec is a rough life. When you get breached, they're blamed. So they spend all their time red-teaming and coming up with outlandish ways that their systems can be compromised, and equally outlandish hoops for users to jump through just to use their product. So the product gets all these hoops. And then an attacker gets even more creative, breaches you again, and now your product has horrible UX + you're still getting breached.
The way so-called ‘2fa’ has been implemented on 90% of the things I interact with as a consumer is an absolute farce. Control of a SIM is nearly 100% of the time sufficient to get absolute control of any account, and showing a $50 fake ID to a teenager at a cell phone store has probably a 99% success rate. Only sites for nerds, plus Google and Microsoft, support TOTP or passkeys. Everywhere else uses the sms BS for 2fa or often effectively 1fa if it can be used to reset the first factor. And these same idiots lecture you for your 100-character password for not containing “at least one of these SIX “special characters”, an upper, a lower, and a digit. `Password1!` is a suitable password to these systems.
On the flip side... I can't tell you how many times I've had to explain how public/private key crypto works to developers and IT security staff working in government projects. And this is just for one-way trust of JWTs for SSO integrations.
I mean, I don't mind if the same dev public-keys are used nearly everywhere in internal dev and testing environments... but JFC, don't deploy them to client infrastructure for our apps.
FWIW, an aside: for about the last decade, I've generally separated auth from the application I'm working with, relying on a limited set of established roles and RSA-signed JWTs, allowing for the configuration of one or more issuers. This allows for a "devauth" you can run locally for log-in-as-whoever-you-want usage, while more easily integrating into other SSO systems and bridging with other auth services/systems in differing production environments. Even with firm SSO/OAuth etc. services, it's still the gist of the configuration.
And then some person realizes that government ids can be faked, so they set up a system of doing a retinal scan of the person dropping off the car and then comparing it to the retinal scan of the person picking it up.
Then they realize that one person may be bribed so they require at least two people to verify at pickup and drop off.
Meanwhile, a car has never ever been stolen this way.
And when I need my wife to pick up my car for me because I took hers to work and she's taking an Uber to get my car...?
Definitely over the top issue.
Yup, it's taking me probably 10x longer gathering legitimate documents to send to these companies.
Meanwhile I could fake them all in a fairly short amount of time...
It’s a risk/reward scenario, and an example of security minded people chasing ghosts.
Conmen stealing VW Golfs from repair shops is a really low-likelihood event. So they could either demand your passport and piss you off, or have you leave a happy customer.
In the remote chance the con artist strikes, it’s a general liability covered by insurance.
The difference is that car theft is still prosecuted by police, whereas cybercrime is not (unless you embarrass a huge corporation).
So the garage can have lower security because even potential thieves do a risk/reward calculation and the vast majority choose not to proceed with it.
Online, the risk/reward calculation is different (what risk?), so more people will be tempted to try (even for the lolz - not every act of cybercrime is done for monetary purposes).
The fact that so many things in the world work like this is the reason for the continued appeal of heist movies. Those always contain clever bits of social engineering and confidence scams which move the plot along - and they are as believable today as they always were.
Aren't there easier ways to steal cars? Like, go to an open parking lot, pick the lock, and start the car by connecting the right wires.
It's risky, sure. But the garage situation also seems risky.
It's even easier than that. A lot of older ignition locks could be defeated by a screwdriver, so you just smash the window, jimmy the ignition lock with the screwdriver, and off you go! There was a specific model of Jeep that was stolen a lot because the rear lock could be popped out easily with pliers and a matching key made; you'd return later with the key to steal the car.
You'd have to be stupid and desperate to steal from a garage.
The people who work there aren't office workers; you've got blue collar workers who spend all day working together and hanging out using heavy equipment right in the back. And they're going to be well acquainted with the local tow truck drivers and the local police - so unless you're somewhere like Detroit, you better be on your way across state lines the moment you're out of there. And you're not conning a typical corporate drone who sees 100 faces a day; they'll be able to give a good description.
And then what? You're either stuck filing off VINs and faking a bunch of paperwork, or you have to sell it to a chop shop. The only way it'd plausibly have a decent enough payoff is if you're scouting for unique vehicles with some value (say, a mint condition 3000GT), but that's an even worse proposition for social engineering - people working in a garage are car guys, when someone brings in a cool vehicle everyone's talking about it and the guy who brought it in. Good luck with that :)
Dealership? Even worse proposition, they're actual targets so they know how to track down missing vehicles.
If you really want to steal a car via social engineering, hit a car rental place, give them fake documentation, then drive to a different state to unload it - you still have to fake all the paperwork, and strip anything that identifies it as a rental, and you won't be able to sell to anyone reputable so it'll be a slow process, and you'll need to disguise your appearance differently both times so descriptions don't match later. IOW - if you're doing it right so it has a chance in hell of working, that office job starts to sound a whole lot less tedious.
Way easier to just write code :)
Stolen cars are often sold for low amounts of money - like $50 - and then used to commit crimes that are not traceable from their plates. It hasn't really been possible to steal and resell a car in the United States for many years, barring a few carefully watched loopholes (Vermont out-of-state registrations is one example that was recently closed).
When Kia and Hyundai were recently selling models without real keys or ignition interlocks, that was the main thing folks did when they stole them.
In Canada there's been a big problem with stolen cars lately. Mostly trucks, and other high value vehicles though. Selling them locally isn't feasible, but there's a criminal organization that's gotten very good at getting them on container ships and out to countries that don't care if the vehicles are stolen. So even with tracking, there's nothing people can do. Stopping it at the port is the obvious fix, but somehow that's not what is being done. Probably bribery to look the other way.
Yeah, the only way to do it would be a cash transaction where you'd have to forge a legitimate looking title/registration and pass it off to a naive buyer. So it's still technically possible, but not in any kind of remotely scalable way.
I reckon it is infinitely riskier to be caught attempting to break into a car than it is to just walk into a service garage and pretend you own the Vdub in the parking lot. There is still a bit of deniability in the 2nd option, but good luck explaining to the police why you are using a set of tools specifically for picking vehicle locks (because you can't just use regular picks and tension wrenches) to break into a vehicle that you don't own.
Good read, but:
> This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves ...
I have to disagree in the strongest terms. It doesn't matter what it is, the only way to do a good job designing something is to imagine the ways in which things could go wrong. You have to poke holes in your own design and then fix them rather than leaving it to the real world to tear your project to shreds after the fact.
The same thing applies to science. Any even half decent scientist is constantly attempting to tear his own theories apart.
I think Schneier is correct about that sort of thinking not being natural for your typical person. But it _is_ natural (or rather a prerequisite) for truly competent engineers and scientists.
I agree. A good engineer would think about all possible corner cases (*). Security is another set of corner cases.
(*) Just yesterday I had to correct a PR because the engineer did not think of some corner cases. All sorts of corner cases happen in real life.
Hmmm, I am 50% with you. Imho, to be an amazing engineer is to see a problem and find a good (whatever good means) solution. Being a good scientist is asking precise questions and finding experiments to validate them.
I think it's more the nuanced difference between safety and security. Engineers build things so they run safe. For example, building a roof that doesn't collapse is a safe roof. Is the roof secure? Maybe I can put thermites in the wood...
This is the difference. Safety is no harm done by the thing itself that engineers build; security is securing the thing from harm from outside.
That is true, but security is similarly subject to the need to constrain threat models to those that are relevant. The scientist doesn't need to worry about mass production, the engineer (in most cases) doesn't need to worry about someone taking a chain saw to it.
Security will have a wider scope by default (unlike natural phenomena, attacks are motivated and can get pretty creative after all) but there will still be some boundary outside of which "not my problem" applies. Regardless, it's the same fundamental thought pattern in use. Repeatedly asking "what did I overlook, what unintended assumptions did I make, how could this break".
That said, admittedly by the time you make it to the scale of Google or Microsoft and are seriously considering intelligence agencies as adversaries the sky is the limit. But then the same sort of "every last detail is always your problem" mentality also applies to the engineers and software developers building things that go to space (for example).
Now I'm scared at the idea of termites with thermite!
It wasn't typical in 2008, I think, is the upshot.
people are misunderstanding because the blog post is really confusing and poorly written haha
Hostnames are not private information. There are too many ways they can leak to the outside world.
It can be useful to hide a private service behind a URL that isn't easy to guess (less attack surface, because a lot of attackers can't find the service). But it needs to be inside the URL path, not the hostname.
In the first example the name is leaked via DNS queries, TLS certificates and many other channels. In the second example the secret path is only transmitted via HTTPS and doesn't leak as easily.

Marginally better for sure, but in this case the path would also have been "leaked" to the sentry instance owned by the developers of the NAS device phoning home. This can happen in zillions of ways and is a good reason to use relatively opaque URLs in general rather than "friendly ids", and to generally be careful about putting secrets in URLs.
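If you do go the secret-path route, it's worth generating the path from a CSPRNG instead of inventing a memorable name. A minimal Python sketch (the `/private/` prefix is just an example):

```python
import secrets

# 32 random bytes -> 43 URL-safe characters; effectively unguessable,
# unlike a "friendly" name someone could enumerate from a wordlist.
secret_path = secrets.token_urlsafe(32)
print(f"/private/{secret_path}/")
```

This only helps if the path never ends up in referrer headers, analytics beacons, or third-party telemetry, which is exactly the failure mode described above.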
Just try it. The first example gets attacked by bots nearly immediately after issuing a TLS cert. The second one usually doesn't get detected at all.
What if you have a wildcard cert for *.example.com?
Much better. But you still leave traces from dns queries.
Subfinder has a lot of sources to find subdomains, not only certs: https://github.com/projectdiscovery/subfinder
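You can see the cert-log side of this for yourself without any tooling: crt.sh exposes CT log data as JSON, so a few lines of Python will list every name ever issued a cert under a domain. A sketch (the crt.sh query endpoint is unofficial and occasionally slow, so this is best-effort):

```python
import json
import urllib.request

def extract_names(entries):
    """Collect unique DNS names from crt.sh JSON entries.
    Each entry's name_value may hold several newline-separated SANs."""
    names = set()
    for e in entries:
        names.update(e["name_value"].split("\n"))
    return sorted(names)

def ct_subdomains(domain):
    """Query crt.sh's CT-log search for all names certified under `domain`."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url) as resp:
        return extract_names(json.load(resp))

# ct_subdomains("example.com")  # network call; returns every logged name
```

Bots run exactly this kind of query continuously, which is why a freshly-certed hostname gets probed within minutes.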
Curious, does this still apply if http is used exclusively?
Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Seems to me that the problem is the NAS's web interface using sentry for logging/monitoring, and part of what was logged were internal hostnames (which might be named in a way that has sensitive info, e.g, the corp-and-other-corp-merger example they gave. So it wouldn't matter that it's inaccessible in a private network, the name itself is sensitive information.).
In that case, I would personally replace the operating system of the NAS with one that is free/open source that I trust and does not phone home. I suppose some form of adblocking ala PiHole or some other DNS configuration that blocks sentry calls would work too, but I would just go with using an operating system I trust.
> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Clown is Rachel's word for (Big Tech's) cloud.
So, it's basically like Cloud2Butt but with a different word.
She was (or is) at Facebook, and "clowntown" and "clowny" are words you see there.
> She was (or is) at Facebook
was (and she worked at Google too)
> "clowntown" and "clowny" are words you see there.
Didn't know this, interesting!
"Clownshoes" is common as an adjective at Mozilla.
[flagged]
No that's Von Clownstick. I won't link to the video, where Jon Stewart made it up, as that would probably be a bit much, for here.
[flagged]
No it's because lots of stuff is duct taped together and then you have tons of scripts or tooling that was someone's weekend project (to make their oncall burden easier) that they shared around. Usually there'll be a flag like --clowntown or --clowny-xyz when it's obvious to all parties involved that it's destined to destroy everything one day but YOLO (also a common one).
Maybe the AI hype is a misdirect so we will blame LLMs for future tech failures instead of the engineers who built up these services
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
You may not owe clown-resemblers better, but you owe this community better if you're participating in it.
We ban accounts that keep posting in this sort of pattern, as yours has, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
I have contributed a lot and yet have a lot to offer this community. I am not doing anything which violates the rules and I am being generous with my interpretation. Remarking on Zuckerberg and other evil people like the pieces of shit they are, is a legitimate and kind way to interact in this community. I know this because I run a hacker collective and it's common knowledge there -- we are all HN users too.
Thank you for your support and encouragement through the many years I have been here.
Anyone know how she came up with the word, or why she chose it?
Maybe from JWZ? https://cdn.jwz.org/images/2016/clown-computing.png
Huh. How did you link to jwz without getting THAT image?
It's another domain, jwz probably didn't set up that redirection on this one.
Probably just because it looks/sounds a little like cloud and has the connotations she wants.
It feels pretty hacker jargon-ish, it has some "hysterical raisins" type wordplay vibes.
Maybe she's a juggalo.
Amusingly, it's a term used by my co-workers to describe anyone that's not them.
Oh well... I suppose humility is your coworker's defining quality? :-)
oh the answer to this is definitive. :-P
"What clown wrote this ... [ runs git blame ] ...erm...never mind."
“When you became Denise, I told all of your colleagues, those clown comics, to fix their hearts or die.”
Your coworkers call you a clown?
I didn't call them workmates.
Hire somebody to make balloon animals in the office for a couple hours, pay in cash, tell the balloonist that your name is [coworker’s name]
I remember the term "clown computing" to describe "cloud computing" from IRC earlier than 2016
I use a localhost TLS forward proxy for all TCP and HTTP over the LAN
There is no access to remote DNS, only local DNS. I use stored DNS data periodically gathered in bulk from various sources. As such, HTTP and other traffic over TCP that use hostnames cannot reach hosts on the internet unless I allow it in local DNS or the proxy config
For me, "WebPKI" has proven useful for blocking attempts to phone home. Attempts to phone home that try to use TLS will fail
I also like adding CSP response header that effectively blocks certain Javascript
It sounds like the blog author gave the NAS direct access to the internet
Every user is different, not everyone has the same preferences
> It sounds like the blog author gave the NAS direct access to the internet
FTFA:
I disagree with your conclusion. The post speaks specifically about interactions with the NAS through a browser being the source of the problem and the use of an OSX application firewall program called Little Snitch to resolve the problem. [0] The author's ~fifteen years of posts demonstrate that she is a significantly accomplished and knowledgeable system administrator who has configured and debugged much trickier things than what's described in the article. It's not impossible that the source of the problem has been misidentified... but it's extremely unlikely. Having said that, one thing I do find likely is that the NAS in question is isolated from the Internet; that's just a smart thing that a savvy sysadmin would do.
[0] I find it... unlikely that the NAS in question is running OSX, so Little Snitch is almost certainly running on a client PC, rather than the NAS.
> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
The term has been in use for quite some time; it is voicing sarcastic discontent with the hyperscaler platforms _and_ their users (the idea being that the platform is "someone else's computer" or, more up to date, "a landlord for your data"). I'm not sure if she coined it, but if she did then good on her!
Not everyone believes using "the cloud" is a good idea, and for those of us who have run their own infrastructure "on-premises" or co-located, the clown is considered suitably patronising. Just saying ;)
> the idea being that the platform is "someone else's computer"
I have a vague memory of once having a userscript or browser extension that replaced every instance of the word "cloud" with "other peoples' computers". (iirc while funny, it was not practical, and I removed it).
fwiw I agree and I do not believe using "the cloud" for everything is a good idea either, I've just never heard of the word "clown" being used in this way before now.
“Cloud to butt” was popular in the early cloud days. It went around Google internally, and caused some… interesting issues.
I remember ridiculing "cloud computing" by calling it "clown computing" decades ago. It's pretty old and well established snark-jargon, like spelling Micro$oft with a dollar sign.
Also, sometimes, we use the term 'weenie' rather than 'clown'. They are interchangeable.
with clown=cloud, GCP must mean google clown platform
The circus left town, but the clowns are still here.
Stuff like this is why I consider uBlock Origin to be the bare minimum security software for going on the web. The amount of 3rd party scripts running on most pages, constantly leaking data to everybody listening, is just mind boggling.
It's treating a symptom rather than a disease, but what else can we do?
I also have taken to using adguard home on the router. It blocks 15 or 20 percent of all my traffic. It's quite scary how bad the tracking and other nasties has become.
Only way I can think of protecting against this is to put a reverse proxy in front of it, like Nginx, and inject CSP headers to prevent cross site requests. Wouldn't block the NAS server side from making external calls, but would prevent your browser doing it for them as is the case here. Also would prevent stuff like Google Analytics if they have it. If you set up a proxy, you could also give it a local hostname like nas.local or something with a cert signed by your private CA that Nginx knows about, and then point the real hostname at Nginx, which has the wildcard cert.
Bit of a pain to set this all up though. I run a number of services on my home network and I always stick Nginx in front with a restrictive CSP policy, and then open that policy up as needed. For example, I'm running Home Assistant, and I have the Steam plugin, which I assume is responsible for requests from my browser like for: https://avatars.steamstatic.com/HASH_medium.jpg, which are being blocked by my injected CSP policy
P.S. I might decide to let that Steam request through so I can see avatars in the UI. I also inject "Referrer-Policy: no-referrer", so if I do decide to do that, at least they won't see my HA hostname in their logs by default.
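For anyone wanting to try the same setup, a minimal sketch of the Nginx side (the hostname, cert paths and upstream address are all placeholders):

```nginx
server {
    listen 443 ssl;
    server_name nas.example.com;

    ssl_certificate     /etc/nginx/certs/wildcard.pem;
    ssl_certificate_key /etc/nginx/certs/wildcard.key;

    location / {
        proxy_pass http://192.168.1.10:5000;

        # Only allow the browser to talk back to this origin;
        # third-party beacons like sentry.io get blocked client-side.
        add_header Content-Security-Policy "default-src 'self'" always;

        # Don't leak the internal hostname to any request you do allow.
        add_header Referrer-Policy "no-referrer" always;
    }
}
```

Note `always` is needed so the headers are also injected on error responses, and a restrictive `default-src` will break any legitimate third-party assets until you add them to the policy.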
ATM machine
NPM is pretty painless
I bought a Synology NAS and I have regretted it already 3-4 times. Apart from the software made available by the community, there is very little one can do with this thing.
Using LE to apply SSL to services? Complicated. Non-standard paths, a custom distro, everything hidden (you can't figure out where to place the SSL cert or how to restart the service, etc.). Of course you will figure it out if you spend 50 hours… but why?
Don’t get me started with the old rsync version, lack of midnight commander and/or other utils.
I should have gone with something that runs proper Linux or BSD.
Unless you know what you are walking into ahead of time I would not recommend Synology to someone who wants to host a bunch of stuff and also wants a NAS. I don’t touch any of the container/apps stuff on my Synology(s), they are simply file servers for my application server. For this purpose, I find Synology rock solid and I’ve been very happy with them.
That said, I’ll probably try out the UniFi NAS offerings in the near future. I believe Synology has semi-walked-back its draconian hard drive policy but I don’t trust them to not try that again later. And because I only use my Synology as a NAS I can switch to something else relatively easily, as long as I can mount it on my app server, I’m golden.
You wanted a server and complain NAS is not just a server.
More like, user wanted an open operating system but chose a proprietary one.
NAS is the primary function. But yes, I want full linux server that I can decide what to install and which protocol to use to upload and/or download files.
Why not just leave the NAS to be a NAS and get a separate server? You're probably better off not trying to overload the NAS to be everything.
Why do I want two things when I can have one? Newer NASes with an N100 or similar are pretty powerful for the cost/package.
Can you provide some details about this overloading concept?
is there a reason you didn’t consider one of the uGreen NAS’s?
(Copied from an earlier comment of mine)
There are guides on how to mainline Synology NAS's to run up-to-date debian on them: https://forum.doozan.com/list.php
please don't do this to your synology
leave it to serve files and iscsi. it's very good at it
if you leave it alone, no extra software, it will basically be completely stable. it's really impressive
Second this, just use it for files, it’s great for it. 10+ years uptime if you leave it alone.
I bought Synology RS217 for $100 last year and it's the best tech purchase I made in years. The software it comes with is the best web interface I experienced in years. The simplicity, stability and attention to detail reminds me of old macs. I have macmini as application server and did not expect to use Synology for anything but file storage / replication. However it comes with a great torrent client that I use all the time now. We also use Synology Office instead of google docs now. It exceeded all my expectations and when it dies, I will immediately buy one of the new rack stations they offer.
I'm so happy I didn't buy a NAS, Synology or not. I think a proper computer running Linux gives me so much more flexibility.
that's still a NAS.
You can run a container on Synology and install your custom services, tools there. At least that is what I do. For custom kernel modules you still need a Synology package for something like Wireguard.
If you have OPNSense, it has an ACME plugin with Synology action. I use that to automatically renew and push a cert to the NAS.
That said, since I like to tinker, Synology feels a bit restricted, indeed. Although there is some value in a stable core system (like these immutable distros from Fedora Atomic).
The extremely old kernel on Synology makes it hard or impossible to run some containers.
I have a fairly recent DS920+ and never had issues with containers - I have probably 10+ containers on it - grafana, victoriametrics/logs, jellyfin, immich with ML, my custom ubuntu toolboxes for net, media, ffmpeg builds, gluetun for vpn, homeassistant, wallabag,...
Edit: I just checked Grafana and cadvisor reports 23 containers.
Edit2: 4.4.302+ (2022) is my kernel version, there might be specific tools that require more recent kernels, of course, but I was so far lucky enough to not run into those.
> Using LE to apply SSL to services? Complicated.
https://github.com/JessThrysoee/synology-letsencrypt
> there is very little one can do with this thing.
It has a VMM and Docker. Entware / opkg exist for it. There's very little that can't be done, but expecting to use an appliance that happens to be Linux-based as a generic Linux server is going to lead to challenges. Be it Synology, TrueNAS, or anything else.
I personally have been blocking sentry and all relevant domains on my machines. I understand this is not generally applicable advice. For me that's the right choice.
Having recently set up sentry, at least one of the ways they use this is to auto-configure uptime monitoring.
Once they know what hosts you run, it'll ping that hostname periodically. If it stays up and stable for a couple days, you'll get an alert in product: "Set up uptime monitoring on <hostname>?"
Whether you think this is valid, useful, acceptable, etc. is left as an exercise to the reader.
Expansion opportunities
Reverse address lookup servers routinely see escaped attempts to resolve ULA and rfc1918. If you can tie the resolver to other valid data, you know inside state.
Public services see one-way traffic (no TCP return flow possible) from almost any source IP. If you can tie that to other corroborated data, the same: you see packets from "inside" all the time.
Darknet collection during final /8 run-down captured audio in UDP.
Firewalls? ACLs? Pah. Humbug.
"Darknet collection during final /8 run-down captured audio in UDP."
Mind elaborating on this? SIP traffic from which year?
2010/2011 time frame. Google and others helped sink the traffic, all written up at apnic labs. It's how 1.1.1.0/24 got held back from general release.
e.g. https://www.potaroo.net/studies/103-slash8/103-slash8.pdf and https://conference.apnic.net/news-archives/2010/network-1/as...
RTP I’d say
I have investigated similar situation on Heroku. Heroku assigns a random subdomain suffix for each new app, so URLs of apps are hard to guess and look like this: test-app-28a8490db018.herokuapp.com. I have noticed that as soon as a new Heroku app is created, without making any requests to the app that could leak the URL via a DNS lookup, the app is hit by requests from automatic vulnerability scanning tools. Heroku confirmed that this is due the new app URL being published in certificate authority logs, which are actively monitored by vulnerability scanners.
> certificate authority logs, which are actively monitored by vulnerability scanners
That sounds like a large kick-me sign taped to every new service. Reading how certificate transparency (CT) works leads me to think that there was a missed opportunity to publish hashes to the logs instead of the actual certificate data. That way a browser performing a certificate check can verify in CT, but a spammer can't monitor CT for new domains.
https://certificate.transparency.dev/howctworks/
Really? Is that new? My apps use wildcard domains: https://i.postimg.cc/SQ82S0Dp/image.png
This applies only to Heroku Fir and Cedar apps (apps that run in Heroku Private Spaces). Heroku Common Runtime apps still use shared wildcard certificate and their domains are not discoverable like this.
Isn't the article over-emphasising the leakage of internal URLs a little bit?
Internal hostnames leaking is real, but in practice it’s just one tiny slice of a much larger problem: names and metadata leak everywhere - logs, traces, code, monitoring tools etc etc.
Is it a real problem? My internal hostnames resolve to RFC-1918 addresses and I have a firewall. If I wasn't so lazy, I'd use split DNS.
In other words: never put sensitive information in names and metadata.
Or name them after little bobby tables.
Is there some sort of injection that's a legal host name?
DNS naming rules for non-Unicode are letters, numbers, and hyphens only, and the hyphens can't start or stop the domain. Unicode is implemented on top of that through punycode. It's possible a series of bugs would allow you to punycode some sort of injection character through into something, but it would require a chain of faulty software. Not an impossibly long chain of faulty software by any means, but a chain rather than just a single vulnerability. Punycode encoders are supposed to leave ASCII characters as ASCII characters, which means ASCII characters illegal in DNS can't be made legal by punycoding them legally. I checked the spec and I don't see anything about a decoder rejecting something that jams one in, but I also can't tell if it's even possible to encode a normal ASCII character; it's a very complicated spec. Things that receive that domain ought to reject it, if it is possible to encode it. And then it still has to end up somewhere vulnerable after that.
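The "ASCII stays ASCII" property is easy to demonstrate with Python's built-in IDNA (2003) codec. A sketch; note that since UseSTD3ASCIIRules is off, the codec passes pure-ASCII labels through untouched rather than validating them, which is the kind of lenience a chain-of-bugs scenario would rely on:

```python
# Non-ASCII labels get punycode-encoded with the xn-- prefix
# (basic code points come first, then the encoded tail):
print("exämple.com".encode("idna"))  # starts with b'xn--exmple-'

# Pure-ASCII labels pass through byte-for-byte; punycode can't be
# used to transform one ASCII character into another:
print("plain.example.com".encode("idna"))
```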
Rules are just rules. You can put things in a domain name which don't work as hostnames. Really the only place this is enforced by policy is at the public registrar level. Only place I've run into it at the code level is in a SCADA platform blocking a CNAME record (which followed "legal" hostname rules) pointing to something which didn't. The platform uses jython / python2 as its scripting layer; it's java; it's a special real-time java: plenty of places to look for what goes wrong, I didn't bother.
People should know that they should treat the contents of their logs as unsanitized data... right? A decade ago I actually looked at this in the context of a (commercial) passive DNS, and it appeared that most of the stuff which wasn't a "valid" hostname was filtered before it went to the customers.
This is exactly why I have a number of "appliances" which never get clown updates: have addresses in a subnet I block at the segment edge, have DNS which never answers, and there are a few entries in the "DNS firewall" [0] (RPZ) which mostly serve as canaries.
This is the problem with the notion that "in the name of securitah IoT devices should phone home for updates": nobody said "...and map my network in the name of security"
[0] Don't confuse this with Rachel's honeypot wildcarding *.nothing-special.whatever.example.com for external use.
Is this a Chrome/Edge thing? Or do privacy respecting browsers also do this? If so, it's unexpected.
If Firefox also leaks this, I wonder if this is something mass-surveillance related.
(Judging from the down votes I misunderstood something)
From what I understand, sentry.io is like a tracing and logging service, used by many organizations.
This helps you (=NAS developer) to centralize logs and trace a request through all your application layers (client->server->db and back), so you can identify performance bottlenecks and measure usage patterns.
This is what you can find behind the 'anonymized diagnostics' and 'telemetry' settings you are asked to enable/consent.
For a WebUI it is implemented via javascript, which runs on the client's machine and hooks into the clicks, API calls and page content. It then sends statistics and logs back to, in this case, sentry.io. Your browser just sees javascript, so don't blame them. Privacy Badger might block it.
It is as nefarious as the developer of the application wants to use it. Normally you would use it to centralize logging, find performance issues, and get a basic idea on what features users actually use, so you can debug more easily. But you can also use it to track users. And don't forget, sentry.io is a cloud solution. If you post it on machines outside your control, expect it to be public. Sentry has a self-hosted solution, btw.
My employer uses Sentry for (backend) metrics collection so I had to unblock it to do my job. I wish Sentry would have separate infra for "operating on data collected by Sentry" and "submit every mouse click to Sentry" so I could block their mass surveillance and still do my job, but I suppose that would cut into their profit margins.
My current solution is a massive hack that breaks down every now and then.
Most organizations I've set Sentry up for tunnel the traffic through their own domain, since many blocking extensions block sentry requeats by default. Their own docs recommend it as well. All that to say, it's not trivial to fully block it and you were probably sending telemetry anyway even with the domain blocked.
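For reference, a Sentry "tunnel" is just a first-party endpoint that relays envelope POSTs upstream; the first line of an envelope is a JSON header carrying the DSN, which the relay parses to decide where to forward. A sketch of that parsing step only (the actual forwarding and allow-listing are omitted, and the DSN below is made up):

```python
import json
from urllib.parse import urlparse

def envelope_target(body: bytes):
    """Extract the upstream Sentry host and project id from an
    envelope's JSON header (its first line)."""
    header = json.loads(body.split(b"\n", 1)[0])
    dsn = urlparse(header["dsn"])
    project_id = dsn.path.rsplit("/", 1)[-1]
    return dsn.hostname, project_id

body = b'{"dsn": "https://abc123@o42.ingest.sentry.io/9999"}\n{"type":"event"}\n{}'
print(envelope_target(body))  # ('o42.ingest.sentry.io', '9999')
```

This is also why CNAME/URL-pattern detection in blockers works: the envelope format and the `/api/<project_id>/envelope/` upstream path are recognizable even when the first hop is first-party.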
With the right tricks (CNAME detection, URL matching) a bunch of ad blocking tools still pick up the first-party proxies, but that only works when directly communicating with the Sentry servers.
Quite a pain that companies refuse to take no for an answer :/
Well somehow Rachel's website is not sending back any response now.
Oh god this sucks, I've been setting up lots of services on my NAS pointing to my own domains recently. Can't even name the domains on my own damn server with an expectation of privacy now.
The (somewhat affordable) productized NASes all suffer from big tech diseases.
I think a lot of people underestimate how easy a "NAS" can be made if you take a standard PC, install some form of desktop Linux, and hit "share" on a folder. Something like TrueNAS or one of its forks may also be an option if you're into that kind of stuff.
If you want the fancy docker management web UI stuff with as little maintenance as possible, you may still be in the NAS market, but for a lot of people NAS just means "a big hard drive all of my devices can access". From what I can tell the best middle point between "what the box from the store offers" and "how to build one yourself" is a (paid-for) NAS OS like HexOS where analytics, tracking, and data sales are not used to cover for race-to-the-bottom pricing.
Actually I host everything on a Linux PC/server, but a different box runs pfSense and a local DNS resolver, so I was talking about setting up a split-brain DNS there. That way I don't have to manually edit the hosts file on every machine and keep it up to date with IP changes. Personally I really like docker compose; it's made running the little homeserver very easy.
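For the split-brain piece, a dnsmasq-style sketch (names and addresses are placeholders; pfSense's DNS Forwarder is dnsmasq under the hood, while the default Resolver is unbound, which has equivalent local-zone directives):

```
# Answer internal names authoritatively from the LAN resolver...
address=/nas.home.example.com/192.168.1.10
address=/plex.home.example.com/192.168.1.11

# ...and never forward the internal zone upstream, so the
# names can't leak out of the network via recursive queries.
local=/home.example.com/
```

This keeps internal names resolvable everywhere on the LAN without touching per-machine hosts files, though it doesn't stop a device from shipping the names out in telemetry once it has resolved them.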
Personally, I've started just using mDNS/Bonjour for local devices. Comes preinstalled on most devices (may need a manual package on BSD/Linux servers) and doesn't require any configuration. Just type in devicename.local and let the network do the rest. You can even broadcast additional device names for different services, so you don't need to do plex.nas.local, but can just announce plex.local and nas.local from the same machine.
There's a theoretical risk of MitM attacks for devices reachable over self-signed certificates, but if someone breaks into my (W)LAN, I'm going to assume I'm screwed anyway.
I've used split-horizon DNS for a couple of years but it kept breaking in annoying ways. My current setup (involving the pihole web UI because I was sick of maintaining BIND files) still breaks DNSSEC for my domain and I try to avoid it when I can.
I don't even understand what kind of webui one would want.
All you really need is a bunch of disk and an operating system with an ssh server. The likes of samba and nfs aren't even useful anymore.
File history, sharing and user management are some of the common ones I can think of.
A bunch of out-of-the-box NAS manufacturers provide a web-based OS-like shell with file managers, document editors, as well as an "app store" for containers and services.
I see the traditional "RAID with a SMB share" NAS devices less and less in stores.
If only storage target mode[1] had some form of authentication, it'd make setting up a barebones NAS an absolute breeze.
[1]: https://www.freedesktop.org/software/systemd/man/257/systemd...
Storage target mode is block-level, not filesystem-level, meaning it won't support concurrent access and any network hiccup or dropped connection will leave the filesystem in an unclean state.
> ...any network hiccup or dropped connection will leave the filesystem in an unclean state.
Given that the docs claim that this is an implementation of an official NVMe thing, I'd be very surprised if it had absolutely no facility for recovering from intermittent network failure. "The network is unreliable" [0] is axiom #1 for anyone who's building something that needs to go over a network.
If what you report is true, then is the suckage because of SystemD's poor implementation, or because the thing it's implementing is totally defective?
[0] Yes, datacenter (and even home) networks can be very reliable. They cannot be 100% reliable and -in my professional experience- are substantially less than 100% reliable. "Your disks get turbofucked if the network ever so much as burps" is unacceptable for something you expect people to actually use for real.
The real trick, and the reason I don't build my own NAS, is standby power usage. How much wattage will a self built Linux box draw when it's not being used? It's not easy to figure out, and it's not easy to build a NAS optimized for this.
Whereas Synology or other NAS manufacturers can tell me these numbers exactly and people have reviewed the hardware and tested it.
To me, it's a question of time and money efficiency. (Time is money.)
I can buy a NAS, whereby I pay money to enjoy someone else's previous work of figuring it out. I pay for this over and over again as my needs change and/or upgrades happen.
Or
I can build a NAS, whereby I spend time to figure it out myself. The gained knowledge that I retain in my notes and my tiny little pea brain gets to be used over and over again as needs change, and/or upgrades happen. And -- sometimes -- I even get paid to use this knowledge.
(I tend to choose the latter. YMMV.)
There are power meters like KWS-303L that will tell you how much manufacturers lie with their numbers.
For example, my ancient TP-Link TL-WR842N router eats 15W whether it's in standby or not, while my main box, with fans, backlight, GPU, HDDs and all, idles at about 80W.
Looking at the Synology site, the only power figure I see there is the PSU rating, which is 90W for the DS425. So you can expect real power consumption of about 30-40W, which is typical for just about any NUC or a budget ATX motherboard with a low-tier AMD-something plus a bunch of HDDs.
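If you want to turn a measured idle draw into a running cost, the arithmetic is trivial. A quick sketch (the 35 W draw and the 0.30/kWh rate are just example numbers; substitute your own meter reading and tariff):

```python
# Back-of-the-envelope: what an always-on box costs per year.
# 35 W idle and 0.30 per kWh are placeholder figures.
idle_watts = 35
hours_per_year = 24 * 365                          # 8760 hours
kwh_per_year = idle_watts * hours_per_year / 1000  # ~306.6 kWh
cost_per_year = kwh_per_year * 0.30                # ~91.98

print(f"{kwh_per_year:.0f} kWh/year, ~{cost_per_year:.0f} per year")
# -> 307 kWh/year, ~92 per year
```

At those example rates, every 10 W of extra idle draw is roughly 26 more per year, which is why the standby number matters so much for an always-on appliance.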
> Can't even name the domains on my own damn server with an expectation of privacy now.
You never could. A host name or a domain is bound to leave your box; it's meant to. All it takes is sending an email with a local email client.
(Not that I'm excusing it; the NAS leak still sucks.)
I have internal zones in my home network and requests to resolve them never leave the private network. So no, it's not meant to.
"Meant to" may indeed not be really accurate.
However, domains and host names were not designed to be particularly private and should not be considered secret. Many things don't treat them as private, so you should not put anything sensitive in a host name, even on a supposedly private network. Unless your private network is completely air-gapped.
Now, I wouldn't be surprised that hostnames were in fact originally expected to be explicitly public.
I don't know much about email, but how would some random service send an email from my domain if I've never given it any auth tokens?
You don't need any auth to send an email from your domain, or in fact from any domain. Just set whatever `From` you want.
I've received many emails from `root@localhost` over the years.
Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.
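As a sketch of how little is involved (the addresses are made up, and this deliberately stops short of sending anything):

```python
# Sketch only: SMTP does not authenticate the From header, so the sender
# address is whatever the client claims. Nothing here actually sends mail.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "root@localhost"        # any address at all, no auth needed
msg["To"] = "someone@example.com"
msg["Subject"] = "hello"
msg.set_content("this could claim to be from any domain")

print(msg["From"])  # root@localhost

# Actual delivery would mean connecting straight to the recipient's MX on
# port 25 -- exactly the traffic many residential ISPs block:
#   import smtplib
#   with smtplib.SMTP("mx.example.com", 25) as s:
#       s.send_message(msg)
```

That the protocol accepts this is precisely why receiving servers lean on SPF/DKIM/DMARC rather than on anything in the message itself.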
You can, but most email providers will immediately reject your email or put it into spam because of missing DKIM/DMARC/SPF
> Admittedly, most residential ISPs block all SMTP traffic, and other email servers are likely to drop it or mark it as spam, but there's no strict requirement for auth.
Source? I've never seen that. Nobody could use their email provider of choice if that was the case.
They don't do DPI; they just look at the destination port. That's why there's a separate submission port for handing mail to your provider's mail agent, where auth is expected, and which typically only sees outbound mail. Technically local-delivery mail too, e.g. where the From and the To headers are valid and share the same domain.
The 3 most common ISPs in the US are Comcast, Spectrum, and AT&T
Comcast blocks port 25: https://www.xfinity.com/support/articles/email-port-25-no-lo...
AT&T says "port 25 may be blocked from customers with dynamically-assigned Internet Protocol addresses", which is the majority of customers https://about.att.com/sites/broadband/network
What ISP are you using that isn't blocking port 25, and have you never had the misfortune of being stuck with comcast or AT&T as your only option?
Well, I am not in the USA for a start, but if it is blocked, it must be inbound only; otherwise it would break everybody.
> if it is blocked it must be only inbound
Yep, at least in France it's like this for ISPs doing this IIRC.
It should not, but it's usual to configure random services to send mails to users, for instance for password resets, or for random notifications.
Another thing usually sending mails is cron, but that should only go to the admin(s).
Some services might also display the host name somewhere in their UI.
https://archive.ph/siEdE
I love that this write-up is hosted on both HTTP and HTTPS. I cannot access the HTTPS version, but the HTTP one displays just fine. Now that's reliability.
> I cannot access the HTTPS version
Curiosity begs: why not?
I opened it on an old computer with an old linux distro with an old browser because old linux distros have reliable and working accessibility features like screen readers and good non-gpu text to speech and advanced keyboard/mouse sharing. Modern linux distros do not. Don't worry, I have javascript execution/etc turned off by default on that machine.
The Clown is my master
I've been chosen!
Eeeeeeeeeah!
I don’t understand. How could a GCP server access the private NAS?
I agree the web UI should never be monitored using sentry. I can see why they would want it, but at the very least should be opt in.
It couldn’t, but it tried.
A for effort, F for firewall.
It said knocking, not accessing
also
> you notice that you've started getting requests coming to your server on the "outside world" with that same hostname.
Not sure why they made the connection to sentry.io and not to CT logs. My first thought was that "*.some-subdomain." got added to the CT logs and someone is scanning *. with well-known hosts, of which "nas" would be one. Curious if they have more insight into the sentry.io leak and where it leaks to...
That hypothesis seems less likely and more complicated than the sentry one.
Scanning wildcards for well-known subdomains seems both quite specific and rather costly for unclear benefits.
Bots regularly try to bruteforce domain paths to find things like /wp-admin, bruteforcing subdomains isn't any more complicated
> Bots regularly try to bruteforce domain paths to find things like /wp-admin
Sure, when WordPress powers 45% of all websites, your odds to reach something by hitting /wp-admin are high.
The space of all the possible unknown subdomains is way bigger than a few well known paths you can attack.
I feel like the author would have noticed and said so if she was getting logs for more than just the one host.
But she mentioned: 1) it isn't in DNS only /etc/hosts and 2) they are making a connection to it. So they'd need to get the IP address to connect to from somewhere as well.
From the article:
> You're able to see this because you set up a wildcard DNS entry for the whole ".nothing-special.whatever.example.com" space pointing at a machine you control just in case something leaks. And, well, something *did* leak.
They don't need the IP address itself, it sounds like they're not even connecting to the same host.
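The catch-all the article describes is just a wildcard record in the public zone pointing at a box you watch. As a sketch, in zone-file form (the names and the address are placeholders; 203.0.113.10 is a documentation address):

```
; anything under nothing-special.whatever.example.com resolves to a
; sentinel machine you control
*.nothing-special.whatever.example.com.  IN  A  203.0.113.10
```

Any leaked hostname under that space then shows up in the sentinel's connection logs, which is exactly how the author caught this.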
Unless she hosts her own cert authority or is using a self-signed cert, the wildcard cert she mentions is visible to the public on sites such as https://crt.sh/.
Yes, the wildcard cert, but not the actual hostname under that wildcard.
Because sentry.io is a commercial application monitoring tool which has zero incentive to do any kind of application monitoring for non-paying customers. That's just cost without benefit.
You would now have to argue that a random third party is using, and therefore paying, sentry.io to monitor random subdomains, for the dubious benefit of learning that a domain exists, via a tool that's way more expensive than a plain scanner.
It's far more likely that the NAS vendor integrated sentry.io into the web interface and sentry.io is simply trying to communicate with monitoring endpoints that are part of said integration.
From the perspective of the NAS vendor, the benefits of analytics are obvious. Since there is no central NAS server where all the logs are gathered, they would have to ask users to send the error logs manually which is unreliable. Instead of waiting for users to report errors, the NAS vendor decided to be proactive and send error logs to a central service.
TIL Rachel uses a Mac.
How do you know?
Little Snitch?
Just getting 404 not found
This is actually a really interesting way to attack a sensitive network: it lets you map the network's internal layout. Getting access is obviously the main challenge, but once you're in, you need to know where to go and what to look for. If you already have that knowledge when planning the attack to gain entry, you've got the upper hand. So while it kinda seems like "OK, so they have a hostname they can't access, why do I care?", if you're doing high-end security at the sysadmin level, this is the sort of small nitpicking it takes to be the best.
>Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
So, no one competent is going to do this: domain names are not encrypted by HTTPS (the hostname is visible in the SNI), so any sensitive info goes in the URL path instead.
I think being controlling of domain names is a sign of a good sysadmin, it's also a bit schizophrenic, but you gotta be a little schizophrenic to be the type of sysadmin that never gets hacked.
That said, domains not leaking is one of those "clean sheet" features that you go for, for no real reason at all, and it feels nice, but if you don't get it, it's not consequential at all. It's like driving at exactly 50mph, or keeping a green streak on GitHub. You are never going to rely on that secrecy, if only because some ISP might see the names, but it's 100% achievable that no one will start pinging your internal hosts and polluting your logs (if you do domain name filtering).
So what I'm saying is, I appreciate this type of effort, but it's a bit dramatic. Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
Obl. nitpick: you mean paranoia, presumably. Schizophrenia is a psychotic disorder; paranoia is the irrational belief that you're being persecuted/watched/etc.
Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
You are right, I meant paranoid.
>Btw, in this case it can’t be paranoia since the belief was not irrational - the author was being watched.
Yes, but I mean being overly cautious in the threat model. For example, birds may be watching through my window, it's true and I might catch a bird watching my house, but it's paranoid in the sense that it's too tight of a threat model.
I know analogies are not meant to be perfect, but birds don't do mass surveillance, and they don't systematically watch your every move either.
That's what you think...
:-)
One never knows, that owl might be electric.
> any sensitive info is pushed to the URL Path
This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.
> Definitely uninstall whatever junk leaked your domain though, but it's really nothing.
We are used to the tracking being everywhere but it is scandalous and should be considered as such. Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
>This too is not ideal. It gets saved in the browser history, and if the url is sent by message (email or IM), the provider may visit it.
Sure. POST for extra security.
> Not the subdomain leak part, that's just how Rachel noticed, but the non advertised tracking from an appliance chosen to be connected privately.
If this were a completely local product, like say a USB stick. Sure. but this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, http), it's not the same category of issue.
> Sure. but this is a Network Attached Storage product, and the user explicitly chose to use network functions (domains, http), it's not the same category of issue.
Is it fair to say that you're saying that it should be considered normal to expect that network-attached devices (designed and sold by reliable, aboveboard companies) connected to (V)LANs with no Internet access will be configured to use computers that use their management interfaces (whether GUI, CLI, or API) as "jumpboxes" to attempt to phone home with information about their configuration and other such "telemetry"?
Do carefully note what I'm asking: whether it should be considered normal to do this, rather than considering it to be somewhat outrageous. It's obviously possible to do this in the same way that it's obviously possible to do things like scratch the paint on a line of cars parked on the street, or adulterate food and medicine.
I've blown fairly competent colleagues' minds multiple times by showing them the existence of certificate transparency logs. They were very much under the impression that hostnames can be kept secret as a protection against external infrastructure mapping.
Can't it? If you get a wildcard certificate?
Otherwise if you are getting a domain specific certificate, you are obviously giving your cert provider the domains, and why would you assume it would be secret?
TLS 1.3 has encrypted client hello which encrypts the domain name during an HTTPS connection.
That's one of those features that's not quite standard, but risks getting into paranoid threat models , like DNS over HTTP, residential proxies, Tor.
> "So, no one competent is going to do this"
What about all the people who are incompetent?
Slightly surprised that this blog seems to have succumbed to inbound traffic.
If you're on an apple device, disable private relay. It appears the blog has tar pitted private relay traffic.
It's tar pitting my normal unproxied residential traffic too
Same, plus my VPN connection.
Same here too. Ironically, the blog is accessible over TOR for me.
Rachel has blogged quite a bit about blocking badly behaved RSS Clients in recent years.
I'd link you to one of the articles if I wasn't blocked too, and my VPN wasn't also blocked!
> Rachel has blogged quite a bit about blocking badly behaved RSS Clients in recent years.
Unfortunately that blocking is buggy and overzealous.
I just gave up eventually and unsubscribed from the RSS feed.
Opens fine for me
“Works on my machine”
that's actually a great spy trap idea, no?
create an impossible internal hostname and watch for it to come back to you
you don't even need a real TLD if I am not mistaken, use .ZZZ etc
> you don't even need a real TLD if I am not mistaken, use .ZZZ etc
if it's not a real TLD, you won't ever see the dns requests coming to you...
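Right; for the trap to trip, the name has to resolve somewhere you can observe, whether that's your authoritative server's query log or a listener on the target address. A hypothetical sketch of such a listener (bound to loopback purely so the demo is self-contained; real use would bind the machine your wildcard record points at):

```python
# Canary listener: log whoever connects to the machine a leaked
# hostname resolves to. Host/port here are demo placeholders.
import socket
import threading

def canary(host="127.0.0.1", port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    seen = {}

    def accept():
        conn, peer = srv.accept()
        seen["peer"] = peer     # in real use: log it, that's your leak
        conn.close()

    t = threading.Thread(target=accept)
    t.start()
    return srv, srv.getsockname()[1], seen, t

srv, port, seen, t = canary()
# Simulate the visitor who learned the leaked hostname:
socket.create_connection(("127.0.0.1", port)).close()
t.join()
srv.close()
print("canary tripped by", seen["peer"][0])
```

The source IP you capture this way is the whole payoff: it tells you which third party learned a name that was never supposed to leave your network.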
This highlights a huge problem with LetsEncrypt and CT logs. Which is that the Internet is a bad place, with bad people looking to take advantage of you. If you use LetsEncrypt for ssl certs (which you should), that hostname gets published to the world, and that server immediately gets pummeled by requests for all sorts of fresh install pages, like wp-admin or phpmyadmin, from attackers.
Unsecured fresh install states that rely on you signing in before an attacker does were always a horrible idea. It's been a welcome change on the Linux side where Linux distros can install with your SSH key and details preloaded so password login is always disabled.
These PHP apps need to change so that the app first boots with credentials already set and is secured at all times.
It's not just Let's Encrypt, right? CT is a requirement for all Certificate Authorities nowadays. You can just look at the certificate of www.google.com and see that it has been published to two CT logs (Google's and Sectigo's)
Now I get why they want to reduce certificate validity to 20 minutes. The logs will become so spammy then that the bad guys won't be able to scan all hosts in them any more...
Technically logging certificates is not a Requirement of the trust stores, but most web browsers won't accept a certificate which isn't presented with a proof of logging, typically (but not always) baked inside the certificates.
The reason for this distinction is that failing to meet a Requirement for issued certificates would mean the trust stores might remove your CA, but several CAs today do issue unlogged certificates - and if you wanted to use those on a web server you would need to go log them and staple the proofs to your certs in the server configuration.
Most of the rules (the "Baseline Requirements" or BRs) are requirements and must be followed for all issued certificates, but the rule about logging deliberately doesn't work that way. The BRs do require that a CA can show us - if asked - everything about the certificates they issued, and these days for most CAs that's most easily accomplished by just providing links to the logs e.g. via crt.sh -- but that requirement could also be fulfilled by handing over a PDF or an Excel sheet or something.
That may be related, but it's not what happened here. Wildcard-cert and all.
Why would you care that your hostname on a local-only domain is published to the world if it is not reachable from outside? Publicly available hosts are already published to the world anyway through DNS.
LetsEncrypt doesn't make a difference at all.
> the Internet is a bad place
FWIW - it’s made of people
No, it's made by systems made by people, systems which might have grown and mutated so many times that the original purpose and ethics might be unrecognizable to the system designers. This can be decades in the case of tech like SMTP, HTTP, JS, but now it can be days in the era of Moltbots and vibecoding.
I like only getting *.domain for this reason. No expectation of hiding the domain but if they want to figure out where other things are hosted they'll have to guess.
So how do you get this ?
Let's Encrypt can issue wildcard certs too
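For the record, wildcard issuance with Let's Encrypt requires the DNS-01 challenge, since you have to prove control of the whole zone. A hypothetical certbot invocation using the Cloudflare DNS plugin (your DNS provider, plugin, and credentials path will differ):

```shell
# Issues a cert covering both *.example.com and the apex.
# Requires the certbot-dns-cloudflare plugin and an API token
# stored in the referenced credentials file.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d '*.example.com' -d example.com
```

Only `*.example.com` then appears in the CT logs; the individual names under it stay out, which is the whole point being discussed here.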
That’s really not a great fix. If those hostnames leak, they leak forever. I’d be surprised if AV solutions and/or windows aren’t logging these things.
> If you use LetsEncrypt for ssl certs (which you should)
You meant you shouldn't right? Partially exactly for the reasons you stated later in the same sentence.
Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).
CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.
So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world, and it doesn't matter where you got your certificate, you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.
Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.
So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.
Another big way you get scooped up, having worked in that industry among other things - is that anybody - internal staff, customers, that one sales guy who insists on using his personal iPhone to demo the product and everybody turns a blind eye because he made $14M in sales last year - calls some public DNS resolver and the public DNS server sells those names --- even though the name didn't "work" because it wasn't public.
They don't sell who asked because that's a regulatory nightmare they don't want, but they sell the list of names because it's valuable.
You might buy this because you're a bad guy (reputable sellers won't sell to you but that's easy to circumvent), because you're a more-or-less legit outfit looking for problems you can sell back to the person who has the problem, or even just for market research. Yes, some customers who own example.com and are using ZQF brand HR software won't name the server zqf.example.com but a lot of them will and so you can measure that.
Statistically, the amount of parasite scanning on LE-"secured" domains is way higher compared to purchased certificates. And yes, this is without any voluntary publishing on LE's side.
I am not entirely aware of what LE does differently, but we made this very clear observation in the past.
Pennywise found my hostname? We're cooked.
You're IT, I'm IT, We're all IT.
We all use floats down here.
For representing monetary values.
Misconfigured clown - bad news indeed.
[dead]
[flagged]
Clueless lol. This is not about any of that. I run Plex on my local network at plex.domain.com. Plex sends logs to the internet with its local domain in the string. Leak. There is no easy way to solve this without deeply inspecting each packet a service sends outside your network, and even that doesn't work when services use SSL certificates and certificate pinning preventing MITMs.
wtf are you allowing plex to initiate outbound connections to begin with?
and why is plex not in its own VLAN with egress FW rules to second with?
lastly, why aren't you running snort/suricata to inspect the packets originating at plex?
let me solve this problem for you - it probably doesn't bother you at all.
otherwise, you'd scratched your itch a long time ago.
> Clueless lol.
It's ok to be clueless. And, it's ok to be working for a FAANG and be clueless too.
> It's ok to be clueless. And, it's ok to be working for a FAANG and be clueless too.
Glad you're not being too hard on yourself :)
You sound so confident about this and yet you're listing a bunch of useless advice that doesn't work, because the analytics are integrated into the web interface and therefore executed inside the web browser. To guard against that, you'd have to block all outbound connections on your laptop and all other devices that could potentially access the web interface.
[flagged]
Its great to be clueless, thats how you learn! Just dont flex and demean other people like "Coming from someone who worked at FAANG, this is sub par post." if you're clueless. Again everything you've said does not really apply here or is impractical.
> [ ... ] if you're clueless.
Done it. Therefore, I flex. I was talking about clueless folks like yourself.
> Again everything you've said does not really apply here or is impractical.
YMMV. Always.
Blocking dns leaks from the local network will not prevent sentry from sending them to the cloud. Blocking sentry from reaching the cloud (like said in the post) will.
From the article:
> Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
Unless you actively block all potential trackers (good luck with that one lol), you're not going to prevent leaks if the web UI contains code that actively submits details like hostnames over an encrypted channel.
I suppose it's a good thing you only wasted 30 seconds on this.
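If you do want to try blocking it anyway, the blunt instrument is null-routing the ingest hosts at a resolver you control; an /etc/hosts-style sketch (the per-project ingest hostname below is made up, and this only covers devices using that resolver, until the vendor ships a new endpoint):

```
0.0.0.0 sentry.io
0.0.0.0 o12345.ingest.sentry.io
```

It's a losing game of whack-a-mole, which is rather the point: the leak happens in the browser, on whatever machine opens the web UI.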
[flagged]
Wow, just skip the "bad post", "took me 30 seconds", "basic stuff" parts already, especially when you are completely missing the point and don't seem to realize it even after several people point it out.
Show some humility.
What's more, one doesn't really read Rachel for potential technical solutions, but because one likes her storytelling.
[flagged]
Haha, this obtuse way of speech is such a classic FAANG move. I wonder if it’s because of internal corporate style comms. Patio11 also talks like this. Maybe because Stripe is pretty much a private FAANG.
Fancy web interfaces are the road to hell. Do the simplest thing that works: plain Apache or nginx with WebDAV and basic auth (proven code, minimal attack surface). Maybe a firewall with ip_hashlimit on new connections. I have it set to 2/minute, and for a browser it's actually fine, while moronic bots make a new connection for every request. When they improve, there's always fail2ban.
That the NAS server, hostname included, is public does not bother me then.
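The setup described above can be sketched in a few lines of nginx config (paths, realm, and htpasswd location are examples; the 2/minute new-connection limit would live in a separate iptables hashlimit rule, not in nginx):

```nginx
# WebDAV storage behind basic auth, served by stock nginx
# (ngx_http_dav_module).
location /dav/ {
    root /srv;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;

    auth_basic           "storage";
    auth_basic_user_file /etc/nginx/htpasswd;
}
```

No JavaScript, no telemetry, nothing to phone home; the attack surface is the web server and the auth file, full stop.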