Most tech gadgets are a distraction and are about as useful off as on.
Industrial stuff sure, but if someone's internet fridge or smart TV goes haywire, so what.
I don’t really care if the IoT device is compromised; I do care if it is used to compromise my trusted devices.
Please read before commenting.
I love and care pretty deeply about security; I run GrapheneOS and am trying to switch to Genode / Sculpt sometime this year. I care. But I'm coming to realize that for the 'personal computing' use case it doesn't matter much for people who aren't nerds.
In any case, the sort of "hardware root of trust" recommended here is an insidious way of going about it.
If large numbers of fridges or cars fail to work properly or go haywire at the same time, it matters.
The article also mentions enabling DDoS.
The same goes for things like routers, and any type of infrastructure.
We have increasing amounts of electricity generation and storage moving to consumer devices. It's a good idea and could make the system more resilient if done right, but if it relies on insecure devices it would be a huge systemic vulnerability.
When does a device not authenticating itself and all its comms matter?
Mossad was able to (at great cost, and with an inept enemy) hijack a telecom supply chain to embed explosives in internet radios.
Had there been any operational oversight, simply having these devices call home with a signed key challenge, the tampered OS, injected code, or explosive payload would at least have had a chance of being caught.
I only bring up THAT example to show that supply chains are as exposed as a bad actor wants them to be.
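To make the "call home with a signed key challenge" idea concrete, here is a toy Python sketch of the kind of check I mean: the backend sends a random nonce, the device signs it with a key provisioned at manufacture, and the backend verifies against the public key recorded at enrollment. All names here are made up for illustration; this is not any vendor's actual protocol.

    # Toy challenge-response attestation sketch (illustrative only).
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # At manufacture: keep the private key on the device, register the public key.
    device_key = Ed25519PrivateKey.generate()
    registered_pubkey = device_key.public_key()

    def backend_challenge() -> bytes:
        return os.urandom(32)            # fresh nonce per check-in

    def device_respond(nonce: bytes) -> bytes:
        return device_key.sign(nonce)    # runs on the device

    def backend_verify(nonce: bytes, signature: bytes) -> bool:
        try:
            registered_pubkey.verify(signature, nonce)
            return True
        except InvalidSignature:
            return False                 # counterfeit or tampered device

    nonce = backend_challenge()
    assert backend_verify(nonce, device_respond(nonce))

Of course that only proves possession of the key; it catches a swapped-out board or firmware only if the key lives in something (like a secure element) that also measures what it boots.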
Or a distributed attack: turn on all the ACs in a city and overload the power grid...
From the person who thought the sale was ownership. More often, a "sale" means 'trade green paper for a license to this physical good, which they retain the right to do whatever with later, at their leisure'.
Look at the scam Nintendo is doing with the Switch 2:
Game cards no longer carry any data other than a serial number used to download the game.
Nintendo claims they can remotely destroy consoles they deem 'modified'. Not just 'removed from online play', but actual full digital destruction of the device.
I support ownership, not this 'we may revoke at any time' licensure.
This is illegal in many jurisdictions.
There was a lawsuit, in which class-action plaintiffs got either $9, or $55 if you said you had used OtherOS.
But if that were any of us hacking Sony, we would be rotting in prison. Yet Sony can hack millions of people's supposed property, and "here's your latte money, too bad, so sad!"
This should have been a criminal trial, in which the executives went to prison, firmware was rolled out to re-enable it, and people were actually made whole.
So yeah, I fully expect Nintendo to remotely destroy hardware, and get away with it. The concept of "you don't own, but you license" is the toxic shit that allows this, alongside DMCA 1201.
The laws should support such a charge, but they don't. At least not everywhere.
If you knowingly use obsolete and insecure devices, you might end up being liable for their actions.
Even in the industrial space, it's simply not going to happen, given the cost of securing things relative to the business value and risk. The exception is projects in some community/municipal/national "critical infrastructure" domain (think power generation), where a legitimate business case (think in terms of MARR, https://en.wikipedia.org/wiki/Minimum_acceptable_rate_of_ret... ) can be made for safety and/or regulatory-compliance future-proofing, and where a source of cheap third-party (usually government) capital is made available.
There are also huge legacy IoT investments in capital-inefficient, low-margin businesses (think mining and shipbuilding) where this sort of "maintenance" and retroactively applying "defense in depth" makes little sense compared to wholesale generational migration.
Nobody is going to do it until some abrupt, forced, existential externality is applied to the entire industry (e.g. ozone layer depletion and refrigerants), and then there'll be a single mass migration.
As with climate change, "we all will" and "with great difficulty, far too late".
* Government regulation.
* Insurance requirements. Mostly applies to businesses.
Talk about the worst corporate doublespeak - 'trusted computing'.
It also goes by DRM, or rental hardware, or "you never actually own it" because someone else retains permanent digital control.
There is NO trust here, only control and power in never actually selling anything.
And since we're talking about IoT, this goes hand in hand with proprietary corporate clouds, hostility to FLOSS projects like Home Assistant, rental in the form of sales, and forced firmware upgrades that remove existing features so the vendor can gatekeep and resell what was already promised.
I don't even need to read further. Anybody, and I do mean anybody, who uses the moniker 'Trusted Computing', should be ignored, blackballed, and relegated to the bin of computing.
Could you expand on this?
There is a reason why, for my IoT setup so far, it's been primarily OpenGarage, IKEA IoT (Zigbee), and existing compute services I host locally.
Remote servers come and go. Companies come and go. If I am at the forced behest of some company to 'bless' my hardware, it's not mine. And I think it's fraud to even call it a sale. It's a 'rental for as long as the company wishes'.
In my area, tornado sirens are unencrypted and triggered by a simple signal that can be recorded and replayed. The cost to add an encrypted radio link is $100k for the base station, and $25k per siren. There are 80+ sirens.
If this were open source, a simple computer could be retrofitted to do this. But because the systems are highly proprietary, the county would be on the hook for $2.1M just to defend against an asshole with a HackRF.
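For the sake of argument, the software side of "not replayable" is almost trivial: a shared key, a timestamp, and an HMAC over the activation command already defeat a record-and-replay attack. Rough sketch below; the key handling and framing are entirely made up, not any real siren protocol.

    # Sketch: authenticate siren commands so a recorded transmission can't
    # simply be replayed later. Illustrative only.
    import hmac, hashlib, time

    SHARED_KEY = b"provisioned-out-of-band-per-siren"   # placeholder
    MAX_AGE = 30  # seconds a signed command stays valid

    def sign_command(command: str) -> bytes:
        msg = f"{int(time.time())}:{command}".encode()
        tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        return msg + b"|" + tag

    def verify_command(frame: bytes) -> str | None:
        msg, _, tag = frame.rpartition(b"|")
        expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return None                                  # forged
        ts, _, command = msg.decode().partition(":")
        if time.time() - int(ts) > MAX_AGE:
            return None                                  # stale / replayed
        return command

    assert verify_command(sign_command("ACTIVATE")) == "ACTIVATE"

A monotonic counter per siren would close the remaining 30-second replay window.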
FLOSS and open principles should matter to governments as well as individuals. Trading temporary ease for no long-term usability is utterly ridiculous. And you end up with a doorstop in the end either way.
- Devices requiring Internet access for functionality that could have been done locally
- Hardware SDKs which are basically abandoned forks by manufacturers so IoT companies ship stone-age kernels and device drivers
- The usual stuff: too much complexity, lack of tests, bad documentation, meaning old parts of the software get forgotten (but remain exploitable)
Theoretical waxing about trusted computing and remote attestation does seem disingenuous when problems with non-certified firmware are probably not even in the top 10 in the real world. Notice how the article author mentions some scary attacks but conveniently omits how the attackers actually gained access?
Of course we should secure IoT, but the article is about one very particular kind of security: roots of trust. The idea is that devices shouldn't run unsigned software, so forget about custom firmware, and about really owning the hardware at all.
There is a workaround, sometimes called "user override", where the owner can set their own root of trust so that they can install custom software. It may involve some physical action, like pushing a switch, so that it cannot be done remotely by a hacker. But the article doesn't mention that; in fact, it specifically says that the manufacturer (not the user) is to be trusted, and that an appropriate response is to reset the device, making it completely unusable for the user. Note that such behavior is considered unacceptable by GPLv3.
There are some cases where it is appropriate; GPLv3 makes a distinction between hardware sold to businesses and "User Products", and I think that's fair. You probably don't want people to tinker with things like credit card terminals. But the article makes no such distinction, even implying that consumer goods are to be included.
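For what it's worth, a "user override" doesn't have to be exotic. Here's a hypothetical sketch of an owner-overridable root of trust, where enrolling a new signing key requires physical presence; this is my own illustration, not something the article proposes.

    # Sketch of an owner-overridable root of trust: the boot ROM accepts any
    # image signed by the currently enrolled key, and enrolling a new key
    # (e.g. the owner's) requires physical presence. Hypothetical, simplified.
    from dataclasses import dataclass
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )
    from cryptography.exceptions import InvalidSignature

    @dataclass
    class BootRom:
        trusted_key: Ed25519PublicKey      # vendor key from the factory

        def enroll_owner_key(self, new_key: Ed25519PublicKey, button_pressed: bool):
            if not button_pressed:
                raise PermissionError("physical presence required")
            self.trusted_key = new_key     # owner takes over the root of trust

        def boot(self, image: bytes, signature: bytes) -> bool:
            try:
                self.trusted_key.verify(signature, image)
                return True                # run the image
            except InvalidSignature:
                return False               # refuse to boot, but don't brick

    vendor = Ed25519PrivateKey.generate()
    owner = Ed25519PrivateKey.generate()
    rom = BootRom(trusted_key=vendor.public_key())

    image = b"custom firmware"
    assert not rom.boot(image, owner.sign(image))            # rejected by default
    rom.enroll_owner_key(owner.public_key(), button_pressed=True)
    assert rom.boot(image, owner.sign(image))                # owner-signed now boots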
If anyone could straightforwardly install the latest DD-WRT or similar then it's solved, because then you don't have to replace the hardware to replace the software, and the manufacturer could even push a community firmware to the thing as their last act before discontinuing support.
The BSP/firmware source should be held in escrow before the device can be sold. And the entity doing the escrow service should periodically build the software and install it onto newly-purchased test devices to make sure it's still valid.
If the company drops support, either by going out of business or by simply allowing issues to go unaddressed for too long, then the escrowed BSP/firmware is released and the people now own their own hardware.
You also need the community around the device to already exist on the day support is discontinued instead of needing to build one then around a device which is by that point years old and unavailable for new purchase.
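That periodic validity check could be as boring as rebuilding from the escrowed source and comparing image hashes against a freshly bought unit. A minimal sketch, assuming reproducible builds and placeholder paths:

    # Escrow sanity check sketch: rebuild the escrowed source and compare the
    # resulting image hash with what a newly purchased device reports.
    import hashlib, subprocess

    def sha256_file(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def escrow_still_valid(escrow_dir: str, device_image_hash: str) -> bool:
        subprocess.run(["make", "-C", escrow_dir, "firmware.bin"], check=True)
        return sha256_file(f"{escrow_dir}/firmware.bin") == device_image_hash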
We made EMRs in the 2000s. Our customers required everything to be placed in escrow. It seemed abundantly prudent to me.
Maybe even prescient; our startup was bought, then murdered in its crib, leaving our customers SOL. But at least they got the source.
We need schemes which enforce security and which make long term economic sense. I would require software escrow for all companies to ensure a bankruptcy doesn't mean all software is lost.
Their code is bad. It should not be used. They should not even write it to begin with. Just ship the device with existing open source code with the minimum -- and published -- modifications to make it run on your device, and focus on being a hardware company.
I'm not even being sarcastic. Most of them aren't that hard to hack now as it is; I know a guy who broke into at least two devices in under an hour each, because that's how bad they are. A piece of junk that ships today and maybe still flies under the radar, so nobody bothers to hack it, isn't going to fly under the radar in a world where there's 10, 20, 50 times more "software engineering" power, in the hands of a lot more people. In 5 years those things are going to be a nightmare for their owners, for their manufacturers, for all kinds of people.
Adding to this that the scale requirements for implementing these standards varies by about 10 orders of magnitude between the smallest and largest companies, which is such a large scale difference that it becomes qualitative with respect to standards.
This creates divergent incentives. Many companies want to optimize standards for the cheapest thing that will check a box, even if the implementation is ineffective in practice. The minority of companies that actually care about robust and efficient implementation often find this isn’t feasible (and in some cases impossible) within the constraints of what ends up in many of these standards, so they ignore the standards since they are strictly worse than whatever non-standard thing they end up doing.
Wash, rinse, repeat. I have participated in IIoT standardization efforts for almost two decades and it is the same vicious cycle of irreconcilable requirements every time.
The only long-term viable approach for IoT security is to not allow these devices on the Internet in the first place. Have the WiFi access point, or some other gateway, act as the broker for all information, and make the default that each device sees nothing until given permission. *
Whenever this comes up people raise the point that this won't work because it disincentivizes making devices to slurp data, but it's not like that ecosystem actually exists anyway, with the exception of smart TVs, which hardly count as IoT. Consumer IoT hasn't taken off because consumers are rightly paranoid about bait-and-switch and being left with useless devices in the walls of their homes.
* This is roughly what https://github.com/atomirex/umbrella is trying to head towards, hence seeing if a $50 AP can act as a media SFU, and learning it totally can.
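A toy model of "each device sees nothing until given permission" might look like the following; this is a pure illustration of the default-deny broker idea, not taken from umbrella's actual implementation.

    # Toy default-deny broker on the AP/gateway: devices publish to topics,
    # but nothing is delivered anywhere (local or cloud) unless the owner
    # has explicitly granted that flow. Illustrative only.
    from collections import defaultdict

    class DefaultDenyBroker:
        def __init__(self):
            self.grants = set()                    # (publisher, topic, subscriber)
            self.subscribers = defaultdict(list)   # topic -> (subscriber, callback)

        def allow(self, publisher: str, topic: str, subscriber: str):
            self.grants.add((publisher, topic, subscriber))

        def subscribe(self, subscriber: str, topic: str, callback):
            self.subscribers[topic].append((subscriber, callback))

        def publish(self, publisher: str, topic: str, payload: bytes):
            for subscriber, callback in self.subscribers[topic]:
                if (publisher, topic, subscriber) in self.grants:
                    callback(payload)              # only explicitly granted flows

    broker = DefaultDenyBroker()
    broker.subscribe("cloud-uplink", "thermostat/temp", lambda p: print("sent", p))
    broker.publish("thermostat", "thermostat/temp", b"21.5")   # silently dropped
    broker.allow("thermostat", "thermostat/temp", "cloud-uplink")
    broker.publish("thermostat", "thermostat/temp", b"21.5")   # now delivered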
Though as we move towards things like virtual power plants and more integrated systems with home batteries, etc. the use case is clearer.
Yeah, it is about that simple. IoT doesn't need the I. Given how low my trust in vendors is, I wouldn't even be happy with a separate no-internet WiFi, since the devices could hook up to some other WiFi.
Certainly no multicast or anything like that.
Not everywhere is going to have WiFi. A SIM can use a private APN that selects the PGW/IP/network range, and the ISP usually provides a VPN to your network. It does not go "over the internet" at all.
This is (usually) how industrial IoT, connected cars, etc. work. Shipping containers can't rely on WiFi.
“Security is the ‘s’ in IoT” was an old joke back then. Still a problem but hardly a new one.
We have Boeing-level incidents daily, except that they can be swept under the rug.
Do not buy them, and avoid built-in ones at all costs.
Maybe at some point newer ones can be secured, but I doubt it.
Never mind that thousands have already been installed. People can't be bothered to update their phones or computers, and those updates are rather easy. Who will bother to update their IoT devices? Plus, 2038 is coming quickly; good luck to people who have 32-bit IoT devices. Do 64-bit ones even exist?
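For anyone who hasn't hit it yet: the 2038 problem is just a signed 32-bit time_t running out of range, which is exactly the kind of thing an unmaintained IoT box will never be patched for. Quick illustration:

    # The 2038 problem in one snippet: a signed 32-bit time_t overflows at
    # 2^31 - 1 seconds after the Unix epoch.
    import struct
    from datetime import datetime, timezone

    max_32bit_time = 2**31 - 1
    print(datetime.fromtimestamp(max_32bit_time, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00

    struct.pack("<i", max_32bit_time)          # still fits in 32 bits
    try:
        struct.pack("<i", max_32bit_time + 1)  # one second later: overflow
    except struct.error as e:
        print("overflow:", e)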
>it’s up to system integrators, who are responsible for the security of an overall service interconnecting IoT devices, to require the features from their suppliers, and to coordinate features inside the device with external resilience and monitoring mechanisms
So we give integrators access to our devices? How is that security? Also, what if the integrator's company is purchased or goes out of business?
"You by device that does a, b, and c"
Device updates.
"You now have device that does a, b, and d"
You ask what happened to c?
"You need to buy a new device to get feature c"
Concerns about supply chain attacks are infinitesimally smaller than concerns about manufacturers writing buggy, unmaintained software/firmware.
We should instead argue for open software/firmware, or at least local-only control (if desired). That also solves the problem of billions of e-waste devices whenever manufacturers go under, shut down their servers, or move them behind new paywalls/adwalls.
However, in my career I deal with a lot of embedded devices. Embedded is where the hell begins. Embedded plus standards is pretty bad: SCADA and the rest. Embedded plus standards plus vendors... now you are doomed. The half-assedness of embedded systems security is worse than any normal coder can imagine.
Basically, the IIoT domain is the one place that remains where network segmentation, encirclement, and so on are the only things we have that work. The worst best option.
Also, with tools like Chip Whisperer (https://www.newae.com/chipwhisperer) the physical security of the hardware root of trust needs to be reevaluated.
It's obvious to me that this model is unacceptable.
A better choice is to ensure that internet stacks are secure in hardware, not software. This is how power is distributed. Internet connectivity could be secured the same way.
Then you could use anything you want without worrying about confused software or hackers, in the same way you can plug in anything to any outlet without worrying about burning the house down.
Yes, actually. I also want the groceries delivered to my door using the services and stores of my choice depending on the items and prices, and for it to integrate seamlessly with my Home Assistant instance without any funny business regarding the API or needing to install yet another bullshit app I didn't ask for.
This is the real reason we don't have nice things. The implementation is always botched by businesses who don't do a thorough job and finish their product. There's clearly a huge chasm between what the customer and the IoT business would consider an MVP. Do it right or don't do it at all.
It's complete nonsense, and the only workaround is to install your own certificate authority on your network and add it as a trusted root cert. Imagine buying a device from Amazon and being told to add their root CA to your machine. No non-tech person should ever be touching root CAs, and it is 10x more dangerous than whatever this insane policy is trying to protect us from.
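To be clear about what that workaround involves: generating the CA itself is the easy part, roughly as sketched below with Python's cryptography package; the dangerous part is pushing that cert into every client's trust store.

    # Minimal self-signed CA for a home LAN, using the `cryptography` package.
    # Generating it is easy; distributing it to every client's trust store
    # (the part nobody should be asking normal users to do) is the hard part.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home LAN CA")])
    now = datetime.datetime.utcnow()

    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    with open("home-lan-ca.pem", "wb") as f:
        f.write(ca_cert.public_bytes(serialization.Encoding.PEM))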
I believe all IoT devices should exist behind a virtual airgap, and the router should, by default, prevent the device from communicating with the internet. In order to send data to and from the cloud, it should pass through some kind of intelligent/auditable gateway, one that the router maintains and that can be updated independently of the devices.
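A stub of what such a gateway could do, purely as a sketch with invented device names and hostnames: every outbound request from the IoT segment is checked against a per-device allowlist and logged.

    # Stub of an auditable egress gateway: IoT traffic may only reach hosts on
    # a per-device allowlist, and every attempt is logged. Hypothetical sketch.
    import logging

    logging.basicConfig(level=logging.INFO)
    ALLOWLIST = {
        "thermostat": {"api.vendor-cloud.example"},   # placeholder hostnames
        "doorbell":   {"events.vendor-cloud.example"},
    }

    def egress_allowed(device: str, destination: str) -> bool:
        allowed = destination in ALLOWLIST.get(device, set())
        logging.info("device=%s dest=%s allowed=%s", device, destination, allowed)
        return allowed

    egress_allowed("thermostat", "api.vendor-cloud.example")   # True, logged
    egress_allowed("thermostat", "tracker.adnetwork.example")  # False, logged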
Not much, but there is some. Getting rid of default admin/1234 passwords was 100% a regulatory push.
Small but important win.
If you still sell EOL products, you have to make sure they are still safe, even as a distributor.
Taking control away from the end user is a good point; I will keep that in mind.
I think the CRA is a step in the right direction. Companies can finally be fined when they sell a product that has known vulnerabilities.
This is something that has been discussed for years; now we have a definitive law.
And we already see changes: if you install Windows, the first thing it does is get patches and then start over.