Is It Time to Change Our Approach to Security?

by Cr0wTom

If you try to remember what everyday life was like in 1984 (the year 2600 was founded), most of you will not remember it at all, and some of you were not even born at the time.  This was when the "digital" space was kicking off, and a new generation of hackers started appearing: people with a passion for technology, for creating and destroying things.  But that small collective grew into a whole industry, until we reached today's era, where we see one critical RCE 0day after another.  To see this many 0days, though, we need technological advancements and wide adoption of them by users.  That is the reality of today.  Despite our expectations for flying cars and ovens that take raw ingredients as input and output ready-made dishes, our technological leaps are enormous, and you don't need me to prove it to you.

Just look at your pocket, your garage, your TV, or even your toothbrush.  Our life is getting more and more connected.  With the excuse of "efficiency" and "practicality," companies got (almost) all our devices connected to the Internet.  And this is not a bad thing, but "with great power comes great responsibility."  It is one of the biggest clichés ever, and it applies perfectly in this case.  Companies want us to use their connected products and services.  They need us to, and they will do whatever it takes.  Unfortunately, most of the time, important aspects of the product development cycle get bypassed, and one of those aspects is safety.

Safety Critical Devices and the Path to a Better Future

You might not think about it that way, but what will happen if someone hacks and disables your fire alarm?  What about your fancy Roomba, which happens to mop your whole apartment?  Your car, which you expect to act "smart" and "assist" you with its Advanced Driver-Assistance System (ADAS)?

You guessed it right.  If some of these (or thousands of other) devices are developed with weak security, or if a product gets rushed to market with the mindset that it will be finished and polished at a later stage (yes Elon, I am looking mainly at you), then the impact is not only on security, but also on safety, with possibly devastating results.

Should this have been considered when evaluating security findings?  Should it potentially increase the severity and the impact of those findings?

The answer is not clear.  My perspective comes mainly from the automotive sector, where safety can be the most impactful characteristic, with connected and autonomous vehicles already in the wild.  But what we can be sure of is that a reevaluation of the scoring systems has to be performed.

Different versions of the Common Vulnerability Scoring System (CVSS), for example, have been released and embraced by security professionals and security-oriented product teams.

But is that enough?

Case Study

Unfortunately, I cannot talk about specifics.  But I will give you an example of an OEM in the automotive industry that called me in to perform a complete security assessment of their product.  Following standard testing methodology against the unit under test, I found several security "issues" that an unauthenticated user could trigger to perform physical actions in the vehicle (e.g., gas, brakes).  Those findings required physical access to the vehicle for the initial foothold, but after that, all the actions could be performed remotely.

On this assessment, though, we were "forced" to use the beloved CVSS scoring, which did not reflect a really important aspect: the physical safety of the driver and passengers.  As a researcher, I can accept that a rating and a standard have to be used in order for all parties to have a common understanding of the severity of an issue.  But big corporations use these ratings and, depending on their policies, the rating drives the final decision of "if" and "when" they will mitigate a finding.
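To make that gap concrete, here is a minimal sketch of the CVSS v3.1 base score calculation in Python.  The vector is hypothetical (I am not disclosing the real finding's metrics), but two things stand out: the Physical attack vector (AV:P) heavily discounts the score, and nowhere in the formula is there a term for human safety.

import math

# CVSS v3.1 base metric weights, from the FIRST.org specification
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED   = {"N": 0.85, "L": 0.68, "H": 0.50}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    # Simplified version of the spec's Roundup(): round up to one decimal
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope, c, i, a):
    pr_w = (PR_CHANGED if scope == "C" else PR_UNCHANGED)[pr]
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope == "C":
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * pr_w * UI[ui]
    if impact <= 0:
        return 0.0
    if scope == "C":
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# Hypothetical vector for a finding like the one above: physical access,
# no privileges or user interaction, scope change into the vehicle network,
# high integrity and availability impact.  Note: no safety input anywhere.
print(base_score("P", "L", "N", "N", "C", "N", "H", "H"))  # 7.3
print(base_score("N", "L", "N", "N", "C", "N", "H", "H"))  # 10.0

Run it and the hypothetical physical-access finding scores 7.3 (High), while the identical impact over the network scores 10.0 (Critical).  On paper, that 7.3 is indistinguishable from a 7.3 data leak, even when the brakes are on the line.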

Back to the actual finding, though: the OEM took the resulting CVSS rating and chose not to mitigate the issue in the end, regardless of the safety implications...

As a researcher, my ultimate goal is to make the world a safer place.  I tried to explain in detail how this finding could be used in an exploit chain, and how all the other interfaces connected to this functionality could be compromised, with devastating outcomes.  But security ratings have their place, and huge corporations do not (and will not) change their policies overnight.

Should There Be a Shift?

How should I feel now?  Is it my problem if the brakes engage while the car is traveling at 120 kilometers per hour?  Is it my problem if assisted driving fails completely at the same speed?

Yes and no, and that's why I am here writing this article.  We need to start thinking about security in a different way.  We need to start approaching exploit chains with safety in mind.  Data, privacy, integrity, exploitability: all of that matters, but we have to make sure that people will not die because of outdated practices, beliefs, policies, and cut corners (now I am looking at you, Boeing).

Let's make the world a happier and safer place.  There is still a chance.

Disclaimer:  The finding eventually got fixed, but we are sure of (and know of) many occasions in which companies act irresponsibly regarding safety critical components.  Many times we find ourselves having to defend our findings in cases where we should not have to.  Automotive and other safety critical industries are new to the connectivity game and some mistakes will be made, and that's why we, as professionals, should be here to help them create better and safer products.