I don't think I've ever seen something this exploitable that is so prevalent. Like couldn't you just sit in an airport and open up a wifi hotspot and almost immediately own anyone with ATI graphics?
Some of us do not enable automatic updates (automatic updates have been the peak of stupidity since the Win98 era). And when you sit in an airport, you don't update all your programs.
But it seems pretty trivial for some bad actor at local ISP.
Have you ever gone to a crowded public place and setup an open hotspot?
That takes it out of one-day-away territory, but it does mean the attacker's malicious HTTP interception only needs to be up (and detectable) during the actual attack window.
Then, of course, if you’re also being their DNS server you can send them to the wrong update check server in the first place. I wonder if the updater validates the certificate.
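For what it's worth, certificate validation is the default in modern TLS stacks; an updater only becomes MITM-able over HTTPS if it explicitly opts out. A quick stdlib Python sketch of the defaults, and of the dangerous opt-out:

```python
import ssl

# A default SSL context already enforces certificate-chain and
# hostname validation; HTTPS only degrades to trust-the-network
# if the client deliberately disables these checks.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True: hostname must match the cert
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: chain must validate

# The opt-out that some sloppy updaters ship with:
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE          # accepts any cert, MITM trivial
```

So the question isn't whether TLS can stop this attack; it's whether the updater bothered to use the defaults.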
So easy to fix, just... why? My kingdom for an 's'. One of these policies is not like the others. Consider certificates and signatures before categorically turning a blind eye to MitM, please: you "let them in", AMD. Wow.
1. Home router compromised, DHCP/DNS settings changed.
2. Report a wrong (malicious) IP for ww2.ati.com.
3. For HTTP traffic, it snoops and looks for opportunities to inject a malicious binary.
4. HTTPS traffic is passed through unchanged.
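Step 3 only works because plain HTTP gives the client no way to distinguish an injected binary from the real one. Even a published checksum, fetched over an authenticated channel, would block it. A minimal Python sketch (the payloads and hash here are illustrative, not AMD's):

```python
import hashlib

def verify_download(payload: bytes, expected_sha256: str) -> bool:
    """Reject any payload whose hash doesn't match the value the
    vendor published out-of-band (over HTTPS, or in a signed index)."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# The genuine installer the vendor published...
real = b"genuine installer bytes"
known_good = hashlib.sha256(real).hexdigest()

# ...versus what a MITM on plain HTTP actually delivered:
injected = b"malicious payload"

print(verify_download(real, known_good))      # True
print(verify_download(injected, known_good))  # False: injection detected
```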
__________
If anyone still has their home-router using the default admin password, consider this a little wake-up call: Even if your new password is on a sticky-note, that's still a measurable improvement.
The risks continue, though:
* If the victim's router settings are safe, an attacker on the LAN may use DHCP spoofing to trick the target into using a different DNS server.
* The attacker can set up an alternate network they control and trick the user into connecting, impersonating a real coffee shop's network or just offering a vague "Free Wifi."
OP and others are saying DNS poisoning is a popular way of achieving that goal.
I'm skeptical, to say the least. The industry standard has been to rule MitM out of scope only where certificates/signatures already cover it, not to ignore it wholesale.
The blog post title is "AMD won't fix", but the actual response that is quoted in the post doesn't actually say that! It doesn't say anything about will or won't fix, it just says "out of scope", and it's pretty reasonable to interpret this as "out of scope for receiving a bug bounty".
It's pretty careless wording on the part of whoever wrote the response and just invites this kind of PR disaster, but on the substance of the vulnerability it doesn't suggest a problem.
Man in the middle attacks may be "out of scope" for AMD, but they're still "in scope" for actual attackers.
Ignoring them is indefensibly incompetent. A policy of ignoring them is a policy of being indefensibly incompetent.
Though, by publishing this blog and getting on the HN front page, it really skews this datapoint, so we can never know if it's a valid editorialization.
Edit: Ah, someone else in this thread called out the "wont fix" vs "out of scope" after I clicked on reply: https://news.ycombinator.com/item?id=46910233. Sorry.
Of course, a company can do it (they just did!), but it shows that they don't care about security at all.
Especially if the answer is "sorry this is out of scope" rather than "while this is out of scope for our bug bounty so we can't pay you, this looks serious and we'll make sure to get a patch out ASAP".
Your characterization of this bug as one "that completely pwn your machine just by connecting it to an untrusted network" is also hyperbolic to the extreme.
Also, if AMD is getting overwhelmed with security reports (a la curl), it's also not surprising. Particularly if people are using AI to turn bug bounties into income.
Lastly if it requires a compromised DNS server, someone would probably point out a much easier way to compromise the network rather than rely upon AMD driver installer.
The fact is allowing any type of unsigned update on HTTP is a security flaw in itself.
>someone would probably point out a much easier way to compromise the network
No, not really. That's why every other application on the planet that does security of any kind uses either signed binaries or HTTPS only. Simply put, allowing HTTP updates is insecure. The network should never be trusted by default.
What's even fucking dumber on AMD's part is that this is just one BGP hijack away from a worldwide security incident.
Reminds me of about ten years ago, when I was installing Debian or something and noticed the URLs for the apt install mirrors were http and not https. People helpfully pointed out this is a non-issue because the updates are signed.
Ok I guess but then why did Debian switch to https?
Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
What I'm more curious about is the presence of both a Development and a Production URL for their XML files, and their use of the Development URL in production. While, as the author said, that URL uses TLS/SSL so it's "safe", I would be curious to know whether the executable URLs are the same in both XML files, and if not, I would perform binary diffing between those two executables.
I imagine there might be some interesting differential there that could lead to a bug bounty. For example, maybe some developer debug tooling that is present only in the development version but is not safe to use in production and could lead to exploitation, and they seemed to use the Development URL in production for some reason...
No, just no. This is not a separate issue. It is 100% the issue.
Let's say I'm a nation-state attacker with resources. I write up my exploit and then do a BGP hijack of whatever IPs the driver host resolves to.
There you go: I've compromised possibly millions of hosts all at once. You think anyone cares that this wasn't AMD's fault at that point?
I already said I do not like that it is just using HTTP, and yes, it is problematic.
What I am saying is that the issue the author reported and the issue that AMD considers man-in-the-middle attacks as out-of-scope, are two separate issues.
If someone reports that a homeowner has the keys visibly on top of their mat in front of their front-door, and the homeowner replies that they do not consider intruders entering their home as a problem, these are two separate issues, with the latter having wider ramifications (since it would determine whether other methods and vectors of mitm attacks, besides the one the author of the post reported, are declared out-of-scope as well). But that doesn't mean the former issue is unimportant, it just means that it was already acknowledged, and the latter issue is what should be focused on (At least on AMD's side. It still presents a problem for users who disagree with AMD of it being out-of-scope).
Genuine question: how does it sound like I'm dismissing it? My first sentence begins with the phrase
> I don't like that the executable's update URL is using just plain HTTP
And my second sentence
> Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
which, with context that AMD reported MITM as out-of-scope, clearly indicates that I think of it as an issue, albeit, a separate one from the one the author already reported.
http://www2.ati.com/...
I've been blocking port 80 forever, so there's that. But now ati.com is going straight into my unbound DNS server's blocklist.
I am pretty sure a nation state wanting to hack an individual's system has way more effective tools at its disposal.
What the hell is more effective than getting root with a trivial MITM?
Not only is it effective, it's stealthy, in that it doesn't out you. It's obviously possible to both find and exploit it without a huge investment, which means nobody knows you're a nation state when you use it. You don't have to risk burning any really arcane zero-days or any hard to replace back doors.
Nation states are absolutely going to use things like that. And so is everybody else.
For whatever reason, distro maintainers working for free seem a lot more competent with security than billion-dollar hardware vendors.
I don't believe that these billion-dollar hardware vendors are really incompetent with security. It's rather that the distro maintainers care quite a bit about security, while the hardware vendors consider these concerns to be of much smaller importance; for their business it is likely much more important to bring the next hardware generation to market as fast as possible.
In other words: distro maintainers and hardware vendors are simply interested in very different things and thus prioritize things very differently.
It's shortsighted, but modern capitalism is more shortsighted than Mr. Magoo.
https://www.legalexaminer.com/lestaffer/legal/gm-recall-defe...
Will fixing this issue bring in more revenue than ignoring it and building a new feature? Or fixing a different issue? If the answer is "no" then the answer is that it doesn't get fixed.
I don't agree with this, because it pre-supposes that there's a limited number of engineers available. The question isn't "shall I pull engineer X off project Y so that he can fix security bugs?", it's "shall I hire an additional engineer to fix security bugs?". The comment above mine suggests the answer to that question is "no, because it's too expensive to do that compared to just paying to clean up security breaches after they happen", which is what I was questioning in my first comment.
When framed correctly (there's effectively an unlimited labour supply for most companies, and effectively a limited demand for staff) then the question becomes "shall we hire an engineer to fix security bugs when we don't need an engineer for anything else?".
In effect, there is, yes. At the very least, there’s more high value work that most companies can do than there are engineers to do said work. There’s a reason literally every leadership course teaches you how to say “no” over and over again.
I think it's more realistic that in any sufficiently large company the bureaucracy is so unwieldy that sensible decisions become difficult to make and implement.
The only other software I regularly use that I think is overall high quality, and that I enjoy using, is the JetBrains IDEs and the Telegram mobile app (though the Premium upselling has gotten kinda gross over the past few years).
An absurd amount of weight is carried by a small number of very influential people that can and want to just do a good job.
And a signal that they're the best is you don't see them in the news.
We need more very influential people who aren't newsworthy.
With Linux itself, it helps that they are working in public (whether volunteering or as a job), and you'd be sacked not in a closed-door meeting, but on LKML for everyone to see if you screw up this badly.
Apt has had issues where captive portals corrupt things. GPG has had tons of vulnerabilities in signature verification (but to be fair here, Apt is being migrated to Sequoia, which is way better).
But these distros are still exposing a much larger attack surface compared to just a TLS stack.
It's the shittiest autoupdater I've ever had to deal with. It never actually managed to install an update.
I don't normally call for people to get fired from their jobs, but this is so disgusting to anyone who takes even a modicum of pride in their contribution to society.
Surely, someone gets fired for dismissing a legitimate, easily exploited RCE using a simple plaintext HTTP MITM attack as a WONTFIX... Right???
No https:// and no cryptographic signature nor checksum that I can see. This makes it almost trivial for any nation-state to inject malware into targeted machines.
I removed AMD auto-update functionality from Windows boxen. (And I won't install anything similar on Linux.) And, besides, the Windows auto-update or check process hangs with a blank console window regularly.
Such trashy software ruins the OOBE of everything else. Attention to small details, zen philosophy, and all that.
It really makes you wonder what level of dysfunction is actually possible inside a company. 30k employees and they can't get one of them to hook up certbot, and add an 's' to the software.
The threat model here is that compromised or malicious wifi hotspots (and ISPs) exist that will monitor all unencrypted traffic, look for anything being downloaded that's an executable, and inject malware into it. That would compromise a machine that ran this updater even if the malware wasn't specifically looking for this AMD driver vulnerability, and would have already compromised a lot of laptops in the past.
I don't understand why so many people think auto-updating software creates more security in any way. It just creates another attack vector for every piece of software that has an auto-updater.
An auto-update mechanism only becomes an RCE if it allows unauthorized third parties to execute code on your machine by failing to verify that the code comes from a legitimate source.
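As a sketch of what that verification looks like with a detached signature (using the third-party `cryptography` package; the key handling and payloads here are illustrative, not any vendor's actual scheme):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: sign each release with a key that never leaves the vendor.
signing_key = Ed25519PrivateKey.generate()
release = b"driver update v1.2.3"
signature = signing_key.sign(release)

# Client side: only the public key ships with the updater.
public_key = signing_key.public_key()

def is_authentic(blob: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, blob)   # raises if blob or sig was tampered with
        return True
    except InvalidSignature:
        return False

print(is_authentic(release, signature))                  # True
print(is_authentic(b"mitm-injected binary", signature))  # False
```

With this in place, the transport stops mattering for integrity: a MITM can corrupt or replace the download, but can't make it verify.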
> you just need the key
Secrecy of cryptographic keys is the basis of all cryptography we use. There's no "just", you need the key and you don't have it.
If they lose just one customer over this they're losing more than the minimum $500 bounty. They also signal to the world that they care more about some scope document than actually improving security, discouraging future hackers from engaging with their program.
This would be a high severity vulnerability so even paying out $500 for a low severity would be a bit of a disgrace.
What's the business case for screwing someone out of a bounty on a technicality?
A lot of people have brought this up over the years:
https://www.reddit.com/r/AMDHelp/comments/ysqvsv/amd_autoupd...
(I'm fairly sure I have even mentioned AMD doing this on HN in the past.)
AMD is also not the only one. Gigabyte, ASUS, many other autoupdaters and installers fail without HTTP access. I couldn't even set up my HomePod without allowing it to fetch HTTP resources.
From my own perspective allowing unencrypted outgoing HTTP is a clear indication of problematic software. Even unencrypted (but maybe signed) CDN connections are at minimum a privacy leak. Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.
Package managers generally enforce authenticity through signed indexes and (directly or indirectly) signed packages, although be skeptical when dealing with new/minor package managers as they could have gotten this wrong.
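Roughly, that chain looks like this (a simplified sketch; real package managers carry far more metadata, and the index-signature check is assumed to have already happened):

```python
import hashlib

# 1. The distro signs one small index file (e.g. apt's InRelease).
#    Assume its signature was already verified against the distro's
#    public key before this point.
verified_index = {
    "vim_9.0.deb": hashlib.sha256(b"vim package contents").hexdigest(),
}

def check_package(name: str, data: bytes) -> bool:
    """2. Every package fetched from any mirror, over any transport,
    must hash to the value recorded in the signed index."""
    return hashlib.sha256(data).hexdigest() == verified_index.get(name)

print(check_package("vim_9.0.deb", b"vim package contents"))  # True
print(check_package("vim_9.0.deb", b"tampered by mirror"))    # False
```

This is why untrusted mirrors are acceptable: the trust lives in the signature over the index, not in the pipe.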
Saying it's "fair" is like saying engine maintenance does not matter because the tires are inflated. There are more components to it.
Ensuring the correctness of your entire stack against an active MITM is significantly more difficult than ensuring the correctness of just a TLS stack against an active MITM.
For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key. For package managers that usually just means trusting gpg - at the very least no less trustworthy than the many TLS and HTTP libraries out there.
Assuming this all came through unencrypted HTTP:
- you're also trusting that the client's HTTP stack is parsing HTTP content correctly
- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
For technical reasons, unencrypted HTTP is also always the simpler (and for bulk transfers more performant) HTTP/1.1 in practice as standard HTTP/2 dictates TLS with the special non-TLS variant ("h2c") not being as commonly supported.
> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.
> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
You don't. Authentication 101 (which also applies to how TLS works), authenticity is always validated before inspecting or interacting with content. Same rules that TLS needs to follow when it authenticates its own messages.
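To make the ordering concrete: authenticate the raw bytes first, then parse. A stdlib Python sketch, using HMAC as a stand-in for whatever MAC/signature scheme is in play (a real updater would use an asymmetric signature, since an HMAC key shipped with the client is no secret):

```python
import hashlib
import hmac
import json

key = b"demo shared secret"  # stand-in for real key material

def parse_authenticated(raw: bytes, tag: bytes) -> dict:
    # Step 1: authenticate the exact bytes received, BEFORE parsing.
    expected = hmac.new(key, raw, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed; refusing to parse")
    # Step 2: only now is the (complex, possibly buggy) parser exposed
    # to the content.
    return json.loads(raw)

msg = b'{"update": "v2"}'
tag = hmac.new(key, msg, hashlib.sha256).digest()
print(parse_authenticated(msg, tag))  # {'update': 'v2'}
```

An attacker-modified byte flips the tag check before any parser ever sees the input, which is the same discipline TLS applies to its own records.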
Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).
> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
You don't, as the signature must be authentic from a trusted author (the specific maintainer of the specific package for example). The server or attacker is unable to craft valid signatures, so something "tacked-on" just gets rejected as invalid - just like if you mess with a TLS message.
> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
The basis of your trust is invalid and misplaced: Not only is TLS not providing additional security here, TLS is the more complex, fragile and historically vulnerable beast.
The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old version of all your mirrors (which is valid and signed by the maintainers), withholding an update without you knowing. But, such MITM can also just fail your connection to a TLS mirror and then you also can't update, so no: it's just privacy.
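For completeness, the usual mitigation for that freeze/replay attack is an expiry timestamp inside the signed index (apt's Valid-Until field, or TUF-style expiry). A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

def index_is_fresh(signed_valid_until: datetime,
                   max_skew: timedelta = timedelta(0)) -> bool:
    """The validity date is inside the signed index, so a MITM replaying
    an old (validly signed) index cannot forge a newer date; the client
    at least notices it is being starved of updates."""
    return datetime.now(timezone.utc) <= signed_valid_until + max_skew

fresh = datetime.now(timezone.utc) + timedelta(days=7)
stale = datetime.now(timezone.utc) - timedelta(days=30)
print(index_is_fresh(fresh))  # True
print(index_is_fresh(stale))  # False: withheld/replayed index detected
```

It doesn't let you fetch the update through a hostile network, but it turns a silent freeze into a visible failure.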
Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html
> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
An attacker doesn't get to attack client's HTTP stack without first piercing protection offered by TLS.
> An attacker doesn't get to attack client's HTTP stack without first piercing protection offered by TLS.
You misunderstand: this means more attack surface.
The attacker can mess with the far more complex and fragile TLS stack, and any attacker controlling a server or server payload can also attack the HTTP stack.
Have you recently inspected who owns and operates every single mirror in the mirror list? None of these are trusted by you or by the distro, they're just random third parties - the trust is solely in the package and index signatures of the content they're mirroring.
I'm not suggesting not using HTTPS, but it is just objectively wrong to consider it to have reduced your attack surface. At the same time, most of its security guarantees are insufficient and useless for this particular task, so in this case the trade-off is solely privacy for complexity.
Having to harden two protocol implementations, vs. hardening just one of those.
(Having set up letsencrypt to get a valid certificate does not mean that the server is not malicious.)
In comparison, even OpenSSL is a really difficult target, it'd be massive news if you succeed. Not so much for GPG. There are even verified TLS implementations if you want to go that far. PGP implementations barely compare.
Fundamentally TLS is also tremendously more trustworthy (formally!) than anything PGP. There is no good reason to keep exposing it all to potential middlemen except just TLS. There have been real bugs with captive portals unintentionally causing issues for Apt. It's such an _unnecessary_ risk.
TLS leaves any MITM very little to play with in comparison.
and “05/02/2026 - Report Closed as wont fix/out of scope”
I think it’s a bit early to say “won’t fix”. AMD only said that it was out of scope for the channel used to report it (I don’t know what that was, but it likely is a bug bounty program) and it’s one day after the issue was reported to them.
I love how they grouped man in the middle there