A May 18th Washington Post article about Chris Roberts, the security researcher questioned by the FBI about monkeying with planes’ avionics via the entertainment system, caught my attention. Not because of the sensational headlines, but because of a sentence attributed to “other aviation and security experts.”
In an attempt to make it seem very unlikely that the avionics could be accessed from the entertainment system, the article states that “hacking a plane’s engine controls through its entertainment system, they argue, is a bit like controlling a car’s steering wheel through its CD player.” Unfortunately, it is quite possible to control a car’s steering wheel through its CD player. This is due to the electric power steering assistance used on most new cars, and the fact that the CD player and the power steering are often both on the CAN bus.
The fact that the CD player in modern vehicles is both often on the CAN bus and hackable is widely known. Noted security expert Bruce Schneier wrote about this topic in 2011. And, of course, once you have access to the CAN bus, you can control other things connected to it such as the electric power steering assistance.
As an example, we can take the modern Ford Mustang. Since at least the 2012 model, the power steering has had three modes selectable via the instrument cluster screens, which are driven by the CAN bus. (See “STEERING FEEL” on page 22 of the owner’s manual.) The CD player is also on the CAN bus, as that is where it gets the dimmer signal. Thus, if you were to hack the CD player, you could then use the CAN bus to control the steering.
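To make this concrete, here is a minimal sketch of what injecting such a message looks like at the wire level, using Linux’s SocketCAN frame layout. The arbitration ID and payload below are purely hypothetical, for illustration only; real steering-related IDs and payloads differ per vehicle and are not publicly documented.

```python
import struct

# Hypothetical CAN arbitration ID and payload for a steering-mode request.
# These values are made up; real ones vary per vehicle.
STEERING_MODE_ID = 0x3A0  # assumption, illustrative only
MODE_SPORT = 0x02         # assumption, illustrative only

def build_can_frame(can_id, data):
    """Pack a classic (non-FD) CAN frame in Linux SocketCAN wire format:
    32-bit id, 8-bit data length code, 3 padding bytes, 8 data bytes."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

frame = build_can_frame(STEERING_MODE_ID, bytes([MODE_SPORT]))
```

On a real target, a compromised head unit would simply write `frame` to an open SocketCAN socket; the point is that nothing on the bus authenticates the sender.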
In conclusion, I certainly hope that controlling a plane’s avionics via the entertainment system is more difficult than controlling a car’s steering via the CD player.
There are many peculiarities that must be taken into account when considering the security of industrial and SCADA systems. An especially relevant one is patching or updating the systems or the software they run. When, during a security assessment of this type of system, you get to the question “How do you carry out maintenance of systems to patch known vulnerabilities?”, you can find very different answers. Some examples:
Option 1: Poker face
We do not apply security patches. It is not necessary, since our industrial network is completely isolated; we rely on the ‘air gap’ to protect our systems, and most vendors don’t publish security updates anyway. On top of that, a software upgrade sometimes also involves hardware changes, so budgetary constraints don’t permit such updates.
This answer, or similar ones, is quite common. I do not think that not applying security patches is a crazy strategy, as long as these conditions are met:
- A risk analysis was performed to clearly understand which threats may affect the unpatched systems and what impact those threats could have. Note that I do not mean a superficial risk analysis, but an in-depth one: know exactly which vulnerabilities are left unpatched, how an attacker could exploit them, and which compensating measures are in place to mitigate the risk of not patching. When considering the threats, pay particular attention to the perimeter of the industrial systems, the points of interaction with traditional networks, and access points easily reachable by visitors or the general public.
- Once this risk analysis is done, if the problems, costs or difficulties of applying the patches outweigh the risk of not patching, it makes sense not to apply them.
- This decision should be made in an informed and conscious way by the risk owner.
- The risk level should be reviewed regularly.
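As a back-of-the-envelope illustration of that trade-off (not a substitute for the in-depth analysis described above), the classic annualized-loss-expectancy comparison can be sketched like this; all figures are made up:

```python
def annualized_loss_expectancy(single_loss, incidents_per_year):
    """ALE = expected cost of one incident x expected incidents per year."""
    return single_loss * incidents_per_year

def patching_is_worthwhile(single_loss, incidents_per_year, patch_cost):
    """Patch when the yearly cost of not patching exceeds the cost of
    patching. All inputs are estimates the risk owner must provide."""
    return annualized_loss_expectancy(single_loss, incidents_per_year) > patch_cost

# Illustrative numbers: an incident costing 200,000 expected once every
# ten years, versus a 50,000 upgrade (hardware changes included).
decision = patching_is_worthwhile(200_000, 0.1, 50_000)  # False: accept the risk
```

In practice the estimates are the hard part, which is why the decision must rest on the kind of in-depth analysis listed above rather than on the arithmetic itself.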
On the other hand, it is clear that we must put pressure on vendors to implement vulnerability management processes for their products, and this should be a key criterion when selecting these technologies.
After taking down the Xbox Live and PlayStation Network gaming services, the Lizard Squad group came back into the spotlight after the leak of the database of their not-so-new LizardStresser booter (a service providing Distributed Denial of Service, or DDoS, attacks for a fee). The breach occurred in mid-January and resulted in the release of a 19 MB SQL file containing the database content, about 150,000 lines. We analyzed the data, focusing on targets and on attacks. After a bit of scripting, behold!
First, here is a summary of the number of attacks and attackers, showing that only 1.95% of the registered users actually used the service and launched at least one attack:
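The scripting involved is straightforward; a simplified sketch of the per-user tallying might look like the following. The column layout and the data below are toy stand-ins, not the actual dump:

```python
from collections import Counter

def attack_stats(registered_users, attack_log):
    """attack_log: (username, target) tuples recovered from the dump.
    (The real dump's schema differs; this is a simplified model.)"""
    per_user = Counter(user for user, _target in attack_log)
    active = len(per_user)                            # users with >= 1 attack
    pct_active = 100.0 * active / len(registered_users)
    return active, pct_active, per_user.most_common(3)

# Toy data standing in for the leaked tables:
users = [f"user{i}" for i in range(1000)]
log = [("user5", "example.org")] * 3 + [("user7", "example.net")]
active, pct, top = attack_stats(users, log)  # 2 active users out of 1000
```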
Dyre is banking malware discovered in mid-2014. It can intercept HTTPS traffic, using techniques documented in this Introduction to Dyreza.
In the context of our review of malware faced by customers, we need to respond rapidly and assess the risk. Dyre is one such piece of malware, and we are releasing a Volatility plugin that we use internally to dump the in-memory configuration of Dyre (Dyreza) samples.
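As a toy illustration of the general idea only — not the released plugin, which locates and parses Dyre’s actual configuration structures in memory — one can carve URL-like strings out of a process dump:

```python
import re

# URL-like runs of printable characters in a raw memory dump.
# This is a crude stand-in for proper configuration parsing.
URL_RE = re.compile(rb"https?://[\x21-\x7e]{4,}")

def carve_urls(dump: bytes):
    """Return the distinct URL-like strings found in a memory dump."""
    return sorted({m.group().decode("ascii", "replace")
                   for m in URL_RE.finditer(dump)})

sample = b"\x00cfg https://evil.example/c2\x00noise http://cdn.example/a\x00"
urls = carve_urls(sample)
```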
The purpose of this post is to detail what type of reviews should be performed on a Linux computer to determine whether it meets the security requirements of the PCI DSS standard. To do this, whenever possible, I will detail the commands to use in each case.
However, although the main purpose is to audit compliance with PCI DSS, the proposed reviews can be used as a starting point for any security audit of a Linux computer.
All the commands in this post have been tested on an Ubuntu 12.04 computer, so it is possible that some of them will need to be modified to work properly on other Linux distributions. We can use the lsb_release -a command to obtain the exact version of the system we are reviewing, or read it from the /etc/os-release file.
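For scripted audits, /etc/os-release is easy to parse programmatically; a small helper could look like this (the sample content below is illustrative):

```python
def parse_os_release(text):
    """Parse /etc/os-release KEY=value pairs, stripping optional quotes."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip comments and blank lines
        key, _, value = line.partition("=")
        info[key] = value.strip().strip('"')
    return info

# In a real audit: text = open("/etc/os-release").read()
sample = '''NAME="Ubuntu"
VERSION="12.04.5 LTS, Precise Pangolin"
ID=ubuntu
VERSION_ID="12.04"
'''
release = parse_os_release(sample)
```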
Co-authored by meatwad and adr13n
We attended 31c3 — A New Dawn, the conference which took place in Hamburg, Germany, from the 27th to the 30th of December 2014.
The conference is still as underground as you would expect it to be: lots of hackerspaces, many 3D printers and a geeky atmosphere. Don’t expect bigwigs and corporations trying to sell you their products; CCC is not about selling and marketing. It’s all about the community, sharing and, above all, security in general.
So, after some “Club Mate”, we headed to the talks. The following list gives you an overview of the different talks we attended. For a complete list of the talks as well as the schedule, refer to the links at the end of this post.
I recently attended two of the largest workshops about hardware security: FDTC and CHES in Busan, South Korea. As usual, lots of new results were presented there.
During the Fault Diagnosis and Tolerance in Cryptography workshop (FDTC), three presentations, including the invited talk, were about different ways to attack Pairing Cryptography algorithms with fault and side-channel attacks. This indicates that focus has moved to this cryptographic primitive and the security of its hardware implementations. Two papers presented fault attacks against the Miller algorithm, which is used to compute pairings. One of these papers showed that combining an initial fault in the Miller algorithm with a second fault to bypass the final exponentiation of a pairing was possible on their target device, an AVR XMEGA-A1 microcontroller.
Two different glitch attacks against microcontrollers were presented. One of them showed that heating the microcontroller helps to induce further effects when clock glitching. In the second paper, a combination of clock glitching and underpowering was applied to both ARM Cortex-M0 and Atmel ATxmega 256 microcontrollers. The faults obtained were skipped or duplicated instruction executions, as well as wrong calculations.
New fault attacks were presented against GOST, SIMON, SPECK, Feistel and Substitution-Permutation networks. Aside from attacks, a new countermeasure was presented to protect RSA implementations against multiple fault injections. The authors made a large effort to formalize fault injections as well as to show the equivalence of some fault models. They then showed how to build provably secure countermeasures against high-order fault attacks (multiple faults) under a generic fault model. This may provide a good tool for hardware designers.
During the Workshop on Cryptographic Hardware and Embedded Systems (CHES), the best paper award was given to a team from Tohoku and Kobe. They showed that it is possible to build a detector, inside a circuit and with standard cells, that detects whether an electromagnetic (EM) probe is near the die. Usually, we use EM probes either to perform leakage acquisitions for side channel analysis or to inject localized faults using local electromagnetic radiation. If a chipset is equipped with such a detector, it may hinder such attacks. Their detector was implemented and tested within an AES processor. A video of their experiment was shown during the rump session.
Another team showed that a hardware Trojan construction presented last year, which consists in modifying the dopant level of a gate in a circuit, can be detected when the circuit is analyzed with a FIB or an SEM.
A presentation leading to a nice demo (picture on the right) was given by people from Tel-Aviv University and Technion. Their idea was to extract the secret key used during GnuPG encryption simply by touching the chassis of a laptop. They successfully demonstrated their attack on stage.
New fault attacks were also presented. One team demonstrated that the countermeasure for AES which was defeated last year has other weaknesses, and they proposed a different countermeasure. Another team combined fault attacks with side channel information to attack the AES key scheduler. In the process, they solved an interesting previously open question about the Hamming weight of the key schedule: they showed that two different keys can have the same key expansion Hamming weight, and they provided an algorithm to construct such keys. A side channel analysis of prime number generation was presented by ANSSI. They attacked the prime sieving algorithm that runs before the Miller–Rabin tests during prime number generation, and applied their attack to a smartcard implementation.
An interesting presentation was made about Photonic Emission Analysis (PEA). A team from Berlin performed an analysis of an arbiter-based Physically Unclonable Function (PUF), which is a common construction of timing-based PUFs. The photonic emission principle is simple: each CMOS transistor can emit photons when its state switches. These photons can be observed from the backside of the chip and thus give information about the physical location of the active part of the die. The team implemented the PUF on an Altera MAX V board. For the photonic emission analysis, they used a Si-CCD camera and an InGaAs avalanche diode to obtain both spatial and timing resolution. With this setup they obtained the timings of some reference challenges; these timings were then used to predict the PUF outputs for further challenges and finally clone the PUF.
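An arbiter PUF of this kind is commonly described by the additive linear delay model, which can be simulated in a few lines. This is a toy model with randomly drawn stage delays, purely to illustrate why recovering per-stage timings is enough to clone the device; it is not the team’s actual implementation:

```python
import random

def make_arbiter_puf(n_stages, seed):
    """Toy additive-delay model of an arbiter PUF: each stage contributes
    a fixed delay difference (drawn at random here), and the response is
    the sign of the accumulated delay."""
    rng = random.Random(seed)
    weights = [rng.gauss(0.0, 1.0) for _ in range(n_stages + 1)]

    def evaluate(challenge):
        # Standard parity transform: a '1' challenge bit swaps the two
        # signal paths, flipping the sign of all earlier contributions.
        phi = [1] * (n_stages + 1)          # phi[n_stages] is the bias term
        for i in range(n_stages - 1, -1, -1):
            phi[i] = phi[i + 1] * (1 - 2 * challenge[i])
        delay = sum(w * f for w, f in zip(weights, phi))
        return 1 if delay > 0 else 0

    return evaluate

puf = make_arbiter_puf(64, seed=2014)
rng = random.Random(1)
challenge = [rng.randrange(2) for _ in range(64)]
response = puf(challenge)
```

Because the model is linear in the transformed challenge, anyone who learns the per-stage delays (here via photonic emission, elsewhere via machine learning on challenge-response pairs) can predict every response.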
To sum up, more and more complex and combined attacks are being realized, and the theory behind them is becoming more fully understood. This progression is resulting in hardware attacks which are practical and sometimes devastating to the security of certain products. Links are being made between previously separate domains to achieve practical attacks. I saw a lot of amazing presentations and had really nice discussions there. I hope to see you next year in Saint-Malo.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.