Laughs in OpenWRT
This is stupid. Their justification is an “unusual degree of vulnerabilities.”
So why not outlaw vulnerabilities? Impose real fines or jail time, or at the very least a civil liability that can’t be waived by EULA. Better than an unconstitutional bill of attainder.
So why not outlaw vulnerabilities?
Of course! If we make vulnerabilities illegal, then all the programmers will make perfect software! The solution was so easy!
There is definitely a difference in quality when talking about imported software.
Also, “outlawing vulnerabilities” would not mean to simply assume everyone starts making perfectly secure software, but rather that you’re fined if you can’t prove your processes are up to spec and you adhered to best practices during development. Additionally, vendors are obliged to maintain their software and keep it secure.
And surprise, surprise, the EU ratified laws that do exactly that (and more) recently. In fact, they’ll be in effect very soon:
Outlaw vulnerabilities? Do they just get little virtual handcuffs when they’re found? If I find a Microsoft vulnerability I get arrested? Not sure I’m following this one.
Edit: it’s really obvious most of you haven’t worked in infosec.
When WannaCry was a major threat to cybersecurity, shutting down banks and hospitals, it was found that it used a backdoor Microsoft intentionally kept open for governments to use.
https://en.wikipedia.org/wiki/WannaCry_ransomware_attack
EternalBlue is an exploit of Microsoft’s implementation of their Server Message Block (SMB) protocol released by The Shadow Brokers. Much of the attention and comment around the event was occasioned by the fact that the U.S. National Security Agency (NSA) (from whom the exploit was likely stolen) had already discovered the vulnerability, but used it to create an exploit for its own offensive work, rather than report it to Microsoft.[15][16]
https://en.wikipedia.org/wiki/EternalBlue
EternalBlue[5] is a computer exploit software developed by the U.S. National Security Agency (NSA).[6] It is based on a vulnerability in Microsoft Windows that allowed users to gain access to any number of computers connected to a network. The NSA knew about this vulnerability but did not disclose it to Microsoft for several years, since they planned to use it as a defense mechanism against cyber attacks.
In real life, if I fail to prevent a crime that I know was premeditated, I am guilty of neglecting my duty. Corporations are people thanks to Citizens United, and governments are run by people, so hold them to the same standards they subject the populace to.
If you are Microsoft, then yeah. You’d go to jail when a Windows vulnerability is found.
In all seriousness though: it would be more likely to be just a civil penalty, or a fine. If we did want corporate jail sentences, there are a few ways to do it. These are not specific to my proposal about software vulnerabilities being crimes; it’s about corporate accountability in general.
First, a corporation could have a central person in charge of ethical decisions. They would go to prison when the corporation was convicted of a jailable offense. They would be entitled to know all the goings on in the company, and hit the emergency stop button for absolutely anything whenever they saw a legal problem. This is obviously a huge change in how things work, and not something that could be implemented any time soon in the US because of how much Congress loves corporations, and because of how many crimes a company commits on a daily basis.
Second, a corporation could be “jailed” for X days by fining them X/365 of their annual profit. This calculation would need to counter clever accounting tricks. For example some companies (like Amazon, I’ve heard) never pay dividends, and might list their profit as zero because they reinvest all the profit into expanding the company. So the criminal fine would take into account some types of expenditures.
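The “jail by fine” arithmetic above could be sketched like this (the numbers and the reinvestment add-back heuristic are made up for illustration):

```python
def corporate_jail_fine(days, reported_profit, reinvested_profit=0):
    """Fine a company days/365 of its annual profit.

    reinvested_profit is a hypothetical add-back to counter the
    "we reinvest everything, so profit is zero" accounting trick.
    """
    effective_profit = reported_profit + reinvested_profit
    return days / 365 * effective_profit

# A 30-day "sentence" for a company reporting $0 profit but
# reinvesting $10B works out to roughly $822M:
fine = corporate_jail_fine(30, 0, 10_000_000_000)
```

A real statute would need a far more robust definition of “effective profit,” but the point is that the add-back makes the zero-reported-profit dodge irrelevant.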
Presumably, once exploited, vulnerabilities would be an offense that the DOJ can fine the company for. I think that’s quite reasonable.
I’d go further: an unpatched vulnerability is an offense that the DOJ can fine the company for.
Because the NSA, CIA, and FBI love them. Vault 7, Magic Lantern, Intel ME and AMD PSP, Dual elliptic curve, COTTONMOUTH-I, ANT/TAO catalog, etc.
Hell, Microsoft willingly reports vulnerabilities and exploits to the government for them to use.
North Korea wishes it had this level of control on the goods its citizens willingly buy.
Why not?
Well…
It discourages self-reporting, makes vendors hostile to security researchers, opens the door to endless litigation over whose component actually “caused” a vulnerability, and encourages CYA culture (like following a third-party spec you know is bad rather than writing a good first-party one, because it guarantees the blame falls on another party).
In a complex system with tight coupling, failure is normal, so you want to have a good way to monitor and remedy failure rather than trying to prevent 100% of it. The last thing you wanna do is encourage people to be hostile to failure-monitoring.
(See also: Normal Accident theory)
What routers are trustable?
American Alphabet Soup backdoors good, Non-American Alphabet Soup backdoors bad.
We could just ban companies from keeping vulnerabilities open for corporate and government use, but that would benefit every citizen of every nation, so no.
If there’s a backdoor for the FBI, there’s nothing to stop Russia and China from also using it. Same for a Chinese backdoor: nothing prevents America from figuring it out. It’s why China bans American companies, and why we’re phasing out Russian and Chinese companies.
It’s impossible for an open door to know who’s using it, and keys for a closed one can be copied and leaked. The safest way to guarantee no one else uses a backdoor is to not have a backdoor.
If you’re not afraid of picking up a wrench yourself:
I just switched to an OPNSense router on protectli hardware.
You don’t have to use something like that to run OPNSense, though; you can put it on nearly any old machine with a couple of NICs. The out-of-the-box config isn’t terrible, and you can find a ton of guides on how to set yourself up securely.
I’ve been using DD-WRT for many years and just moved to OpenWRT. Although there have been various generic vulnerabilities that affected all IP devices and needed to be patched on these platforms too, I can’t remember a single vulnerability that was specific to either DD-WRT or OpenWRT.
The ones that you build yourself and load with free & open source software. Basically any x86 PC or even ARM SBCs like the Raspberry Pi can work as a router, as long as you have 2 separate network interfaces. There are quite a few FOSS router/firewall operating systems like OpenWRT, DD-WRT, pfSense and OPNSense (my personal favorite). If you don’t want to do this yourself, there are companies like Protectli that offer dedicated pre-built hardware that’s ensured to be compatible with pfSense/OPNSense and comes with Coreboot pre-installed.
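For the curious, the two-NIC router idea boils down to very little on a plain Linux box. A minimal sketch (interface names `wan0` and `lan0` are assumptions; substitute your own — OPNSense/pfSense generate the equivalent rules through their web UI):

```shell
# Allow the kernel to route packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# NAT everything leaving the WAN side; drop unsolicited inbound
nft -f - <<'EOF'
table inet router {
  chain forward {
    type filter hook forward priority 0; policy drop;
    iifname "lan0" oifname "wan0" accept      # LAN -> internet
    ct state established,related accept       # return traffic only
  }
  chain postrouting {
    type nat hook postrouting priority 100; policy accept;
    oifname "wan0" masquerade                 # source NAT
  }
}
EOF
```

You’d still want DHCP and DNS on the LAN side (dnsmasq covers both), which is exactly the plumbing the dedicated router distros bundle and maintain for you.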
In a statement cited by Reuters, TP-Link reportedly claimed that it does not sell routers in the U.S. In May, the company announced it had “completed a global restructuring” and that TP-Link Corporation Group — with headquarters in Irvine, California and Singapore — and TP-Link Technologies Co., Ltd. in China are “standalone entities.”
Son of a bitch. I just bought a TP-Link Omada wireless access point. I wonder if they’re in the same category. The article doesn’t go into that level of detail.
Yeah I’ve got a handful of switches and a WAP from them… I somehow never realized they were out of the PRC. Will probably shift away from their stuff now.