Hacker Perspective: Tiffany Strauchs-Rad

Not many 12-year-olds in the late 1980s and early 1990s had 23 telephone lines going into their middle class homes in Great Falls, Virginia, in the suburbs of Washington, D.C., but I did.  The neighbors thought we might be bookies running illegal gambling from our basement, but with a father who was a former operative with the CIA, they did not bother us.  No matter the hour, my brother Karl and I would respond to the low-toned beeping requests from users to speak to the sysop.

We took turns sharing phone numbers, playing MUD games, and chatting with the users of the bulletin board system that was operated from a room in our home.  Back then, when the Internet was still in its infancy, we would cold-dial phone numbers that were shared amongst users.  Where it took you - pre-enforcement of the Computer Fraud and Abuse Act - was always an exciting adventure.  We were kids and not interested in malicious hacking, but in making friends online and playing games.

I remember the beginning of the adoption of TCP/IP, the birth of the World Wide Web, and sending my first email to a friend at MIT.  Dialing the numbers and patiently waiting for the 8-bit pictures to slowly appear on the screen - 20 minutes for a single page to render - was well worth it.  I got a glimpse, from inside my home, of someone else's creation - someone else's world.  The unknowns, such as who would respond to sysop chat requests and what files and games other systems contained, were exhilarating.

My vulnerability research began with those adventures delving into how systems worked and were networked.  At the time - at the ages of 10 and 12 - my brother and I were the greatest competition to CompuServe and AOL.  I remember asking AOL administrators, "When will you be getting that new thing... email?"  The representatives of those companies did not know that the competitors challenging them for those few years were kids, but, without capital funding, we were not able to compete and those companies took our users.  The BBS morphed into an ISP around 1990, but when Time Warner and the big telecoms came onto the scene, we were forced to become users of their systems instead.

While I was an undergraduate at Carnegie Mellon University, the Computer Fraud and Abuse Act (CFAA) was amended to include stiffer penalties and stricter definitions of what constituted "unauthorized access" - and this allowed minor-aged hackers to be tried as adults for some computer crimes.  The technical knowledge base of hackers could now put them on the receiving end of escalated charges, akin to possessing a weapon or having advanced offensive skills.  Lawmakers theorized that this knowledge increased the likelihood of defendants understanding the ramifications of allegedly criminal actions, thus justifying an increase in the penalties.

College was the first time I met other hackers.  During our summers, I chased level 4 Hot Zone viruses to Patient Zero in the jungles of South America while they were interning at Microsoft and chasing zero days of a digital kind.  It was not until I read my first issue of 2600: The Hacker Quarterly and went to my first hacker conference, DEFCON, in Las Vegas in 1999 that I discovered that there were many out there who shared my affinity for figuring out how things worked and how viruses and worms spread, and who also shared an interest in designing better things.

The first hacking project in which I participated was accessing car computers.  A hacker named Nothingface showed me that even if a system was locked down with intellectual property protections and digital locks, if it was on a device he owned, he wanted to know what it did, how it operated, and whether it stored information about him.  He hacked his SUV for off-roading purposes.  The last thing you want while off-roading in the backwoods of Washington State is for your airbags to go off or for your anti-lock brakes to thwart a rocky hillside descent.  He inspired me to look into issues beyond what could technically be done, and he taught me a lot.

I learned that most things could actually be done.  However, how to tell people about what you have done without being implicated as a criminal or an intentional violator of intellectual property was a different matter.  We were not malicious hackers.  We were security researchers and weekend mechanics, but we had stumbled upon some things related to public safety and privacy that we wanted to share.  We started the OpenOtto project for car hackers and then I went to law school.

If you wanted to study technology law or computer security in law school near the turn of the century, the only classes that came close were contracts, intellectual property, and criminal law.  I took all of those basic classes at the University of Maine School of Law, one of the first law schools in the U.S. to have a technology law center, and drove 1.5 hours each way, twice a week (in the snow!), to Franklin Pierce Law Center in Concord, New Hampshire to take one of the first cybercrimes law classes in the country.

I was disappointed that the hacker spirit and mentality now carried a negative connotation thanks to the media's misuse of the word "hacker."  I remember introducing myself as a "hacker" in the cybercrimes class.  A hush, and then whispering, came over the lecture hall.  Twelve years later, this is still the general reaction I get from the legal community, but, even back then, there were some who got it: my law school instructor was a fellow graduate of Carnegie Mellon University and, though a decade older than me, he appreciated what it meant.  His class shaped my career.

The Digital Millennium Copyright Act (DMCA) was signed into law by President Clinton during my first year of law school.  While we were taught why it was strong intellectual property protection for digital media, I wanted to talk about the chilling effects it would have on the computer security community and about the case of MPAA vs. 2600.  I also read about Kevin Mitnick's and Bernie S.'s harrowing experiences with the judicial system.  Reading their cases inspired me to stay in law school and work to make changes in what I viewed as a judicial system that did not yet have a technical understanding of the elements of computer crimes.  If it matters how a break-in occurred in the brick and mortar world, then the elements of how it was done using ones and zeroes should matter just as much.  Expertise in preserving that digital evidence for trial should matter just as much, too.

With the enforcement of the DMCA, in addition to severe civil penalties and fines that could be imposed on an infringer, there were stiff criminal penalties for anyone who "circumvented anti-circumvention measures."  I posed this question to my law school classmates: "Will this legislation, potentially, have the unintended consequence of making computer security worse and stifling free speech?"  The answer I got was that it was intended to protect people's work, not to stifle research or to encourage slapping on a weak crypto "anti-circumvention measure" to trigger the DMCA rather than spending resources on better computer security and more rugged code.  The academic and fair use clauses protected that, right?  But, in practice, would it work that way?  I do not think it did, and now, with new proposed legislation like the Stop Online Piracy Act of 2011, we must address these issues again.

Twelve years later, I am writing this as I fly to the West Coast to evaluate significant security vulnerabilities in SCADA/ICS systems.  In the summer of 2011, my father, John J. Strauchs, exploit writer Teague Newman, and I invested $2,500 of our own money and two months of private research in a basement in the D.C. area to show that we could open jail and prison doors - while suppressing alarm systems - from outside the facility by taking advantage of known Programmable Logic Controller (PLC) and physical security vulnerabilities.  Our disclosure to the U.S. federal government took a while.  Directors from four federal agencies were called in for a meeting with us.  We gave a bare-bones presentation to alert the feds to the possibility that their assets - beyond just correctional facilities - might also be vulnerable to an attack similar to the one we designed.

After my plane lands, I'm returning a call to a person who has discovered a significant security flaw in the telecom system in Washington, D.C.  Some researchers have been raked over the coals for disclosing their research publicly and have become targets of DMCA and/or CFAA allegations.  Worse yet, some are "outed" to the FBI as "criminal hackers" if they tell the vendors or the U.S. government about their research.  How to disclose the results of information security research is as crucial to the industry - and to the researcher - as what has actually been discovered.

When I am contacted by an individual, I ask the following questions to determine how to strategize their disclosure:

  • What is the scale of the discovery?  For example, is the Personally Identifying Information (PII) of only a few people at risk or could you, potentially, take down an entire network crucial to public safety?
  • Did you have authorized access to the system/device?
  • Did you break any cryptographic protections, brute force, or otherwise circumvent any security measures to make your discovery?
  • Are you under a nondisclosure agreement or do you have a U.S. national security clearance?
  • Do you want your name to be associated with the release or do you wish to stay anonymous?

From this point, I will help this security researcher decide how to alert the telecom in D.C. that it has a big problem.
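Purely as an illustration - disclosure strategy is a legal judgment call, not an algorithm - the questions above could be sketched as a rough checklist in code.  Everything below (the field names, the thresholds, and the advice strings) is my own hypothetical framing, not legal advice:

    from dataclasses import dataclass

    @dataclass
    class Discovery:
        # Answers to the triage questions above (hypothetical structure)
        large_scale: bool               # public-safety network vs. a few people's PII
        authorized_access: bool         # did you have authorization?
        circumvented_protections: bool  # broke crypto, brute forced, etc.
        under_nda_or_clearance: bool    # NDA or U.S. national security clearance
        wants_anonymity: bool           # named release or anonymous?

    def first_pass(d: Discovery) -> str:
        """A rough first read - every real case still needs counsel."""
        if d.under_nda_or_clearance:
            return "Stop: talk to a lawyer before disclosing anything."
        if not d.authorized_access or d.circumvented_protections:
            advice = "Elevated CFAA/DMCA exposure: disclose through counsel or an intermediary."
        elif d.wants_anonymity:
            advice = "Route the report through a trusted third party."
        else:
            advice = "Coordinated disclosure directly to the vendor is plausible."
        if d.large_scale:
            advice += "  At public-safety scale, speed matters as much as liability."
        return advice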

The best part of the work I do is that I see the newest private sector security research before most people do.  The most difficult part, at times, is the knowledge I have of these vulnerabilities.  I know that many will not be patched quickly, or at all, and, because I understand the ramifications of the exploits, that knowledge can be a difficult burden to bear.  Often, finding a receptive vendor or government agent to report them to is a challenge.  When told of serious vulnerabilities or exploits, more often than not, they initially respond with denial: "It can't really be possible to simply increment a number at the end of a URL and get the PII and credit card numbers for up to 30,000 customers from a large retail chain in California, can it?"

"Yes, it is," I answer.  Following this, the response often includes demands to know the identity of the researcher and sometimes threats of law enforcement taking action if names are not given, to which I reply, "It shouldn't be important to you who discovered this, but that they wanted to tell you first."

Disappointingly, even after the dance between "show us the proof" and "who did this," many times the vulnerability is not patched.  To be fair, some cannot be patched quickly, as is the case with industrial control systems.  In turn, some researchers have chosen the zero day route, in which the vendor is given no warning about the vulnerability or exploit; the details are dropped anonymously (or not), and the vendor must scramble to patch.  The decision of how, or if, to share security research is one that only the researcher can make, but I encourage researchers to evaluate the severity of the risk in addition to the ethical and legal ramifications.

My excitement in doing this work, nevertheless, is the same that existed for me during the era of the BBS and my introduction to computers.  Now, instead of dial-up taking forever, everything is immediately accessible via portable devices I carry on me at all times.  Obtaining information, coordinating efforts with other hackers, and telling the public of our research results can be done instantaneously without geographic borders or citizenship, and with anonymity - if one so desires.

This new realm is different from the one I knew as a child; privacy expectations and protection laws have loosened, and criminal sanctions for unauthorized access have been enhanced.  At the same time, however, a borderless digital world in which one can be a part of something vast, organized, and sophisticated is a reality that is new to me.

The Internet has grown up, as have I, and I revel in the excitement of the unknown and the challenge to ascertain how things work as much now as I did then.  In an industry in which things become obsolete quickly, it's rapid change that keeps me - and my ambitions - young.  I love what I do and am appreciative of all the hackers, teachers, and even some very smart and ethical guys in law enforcement who helped me get to where I am today.

I encourage you to responsibly learn these skills and share with the next generation what is not [yet] taught in schools.  It is your knowledge and efforts that will change how information is shared, how "security" is defined, how ownership of intangible property is understood, and if online freedoms will be upheld or will wither.  As hackers, we belong to a community greater than just where our country's passport can take us.  I implore you to preserve that and responsibly explore those exciting unknowns.

Tiffany Strauchs-Rad, BS, MBA, JD, is the president of ELCnetworks, LLC, a technology development, law, and business consulting firm with offices in Portland, Maine and Washington, D.C.  She is also a part-time adjunct professor in the computer science department at the University of Southern Maine, teaching computer law, ethics, and information security.  Her academic background includes studies at Carnegie Mellon University, Oxford University, and Tsinghua University (Beijing, China).  She has presented at Black Hat, DEFCON, HOPE, and CCC conferences.
