CRYPTO-GRAM

March 15, 2009

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0903.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
** *** ***** ******* *********** *************

In this issue:
    Perverse Security Incentives
    Privacy in the Age of Persistence
    News
    Insiders
    The Doghouse: Singularics
    Three Security Anecdotes from the Insect World
    The Kindness of Strangers
    New eBay Fraud
    Schneier News
    IT Security: Blaming the Victim
    Balancing Security and Usability in Authentication
    Comments from Readers


** *** ***** ******* *********** *************

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.
I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They're often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.
Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. "No touching" is a security measure as well, but it's security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives -- Whole Foods wrote a corporate policy that benefited itself.
At least, it works as long as the police and other factors keep society's shoplifter population down to a reasonable level.
Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.
In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.
And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge of the decision used CYA security measures to prevent specific, known tactics rather than broad, general ones.
The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren't sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?
I read almost five years ago that prisoners were being held by the United States far longer than they should be, because "no one wanted to be responsible for releasing the next Osama bin Laden." That incentive to do nothing hasn't changed. It might even have gotten stronger, as these innocents languish in prison.
In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don't have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.
For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It's only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.
This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/securitymatters/2009/02/securitymatters_0226 or http://tinyurl.com/aku6bf

Whole Foods incident:
http://www.mlive.com/news/index.ssf/2007/12/grocery_worker_fired_for_stopp.html or http://tinyurl.com/3dma49

King County ID checks:
http://www.kingcounty.gov/safety/sheriff/Services/Gun.aspx

Terrorists as liquid bombers:
http://www.schneier.com/blog/archives/2007/08/details_on_the_1.html

CYA security:
http://www.schneier.com/blog/archives/2007/02/cya_security_1.html

The perverse incentives of holding terrorist suspects in custody:
http://query.nytimes.com/gst/fullpage.html?res=9C00E3DF133EF934A15756C0A9629C8B63&sec=&spon=&pagewanted=all or http://tinyurl.com/cgh86n
** *** ***** ******* *********** *************

Privacy in the Age of Persistence

(Note: This isn't the first time I have written about this topic, and it surely won't be the last. I think I did a particularly good job summarizing the issues this time, which is why I am reprinting it.)
Welcome to the future, where everything about you is saved. A future where your actions are recorded, your movements are tracked, and your conversations are no longer ephemeral. A future brought to you not by some 1984-like dystopia, but by the natural tendencies of computers to produce data.
Data is the pollution of the information age. It's a natural byproduct of every computer-mediated interaction. It stays around forever, unless it's disposed of. It can be valuable when reused, but reuse must be done carefully; otherwise, its aftereffects are toxic.
And just as 100 years ago people ignored pollution in our rush to build the Industrial Age, today we're ignoring data in our rush to build the Information Age.
Increasingly, you leave a trail of digital footprints throughout your day. Once you walked into a bookstore and bought a book with cash. Now you visit Amazon, and all of your browsing and purchases are recorded. You used to buy a train ticket with coins; now your electronic fare card is tied to your bank account. Your store affinity cards give you discounts; merchants use the data on them to reveal detailed purchasing patterns.
Data about you is collected when you make a phone call, send an e-mail message, use a credit card, or visit a website. A national ID card will only exacerbate this.
More computerized systems are watching you. Cameras are ubiquitous in some cities, and eventually face recognition technology will be able to identify individuals. Automatic license plate scanners track vehicles in parking lots and cities. Color printers, digital cameras, and some photocopy machines have embedded identification codes. Aerial surveillance is used by cities to find building permit violators and by marketers to learn about home and garden size.
As RFID chips become more common, they'll be tracked, too. Already you can be followed by your cell phone, even if you never make a call. This is wholesale surveillance; not "follow that car," but "follow every car."
Computers are mediating conversation as well. Face-to-face conversations are ephemeral. Years ago, telephone companies might have known who you called and how long you talked, but not what you said. Today you chat in e-mail, by text message, and on social networking sites. You blog and you Twitter. These conversations -- with family, friends, and colleagues -- can be recorded and stored.
It used to be too expensive to save this data, but computer memory is now cheaper. Computer processing power is cheaper, too; more data is cross-indexed and correlated, and then used for secondary purposes. What was once ephemeral is now permanent.
Who collects and uses this data depends on local laws. In the US, corporations collect, then buy and sell, much of this information for marketing purposes. In Europe, governments collect more of it than corporations. On both continents, law enforcement wants access to as much of it as possible for both investigation and data mining.
Regardless of country, more organizations are collecting, storing, and sharing more of it.
More is coming. Keyboard logging programs and devices can already record everything you type; recording everything you say on your cell phone is only a few years away.
A "life recorder" you can clip to your lapel that'll record everything you see and hear isn't far behind. It'll be sold as a security device, so that no one can attack you without being recorded. When that happens, will not wearing a life recorder be used as evidence that someone is up to no good, just as prosecutors today use the fact that someone left his cell phone at home as evidence that he didn't want to be tracked?
You're living in a unique time in history: the technology is here, but it's not yet seamless. Identification checks are common, but you still have to show your ID. Soon it'll happen automatically, either by remotely querying a chip in your wallet or by recognizing your face on camera.
And all those cameras, now visible, will shrink to the point where you won't even see them. Ephemeral conversation will all but disappear, and you'll think it normal. Already your children live much more of their lives in public than you do. Your future has no privacy, not because of some police-state governmental tendencies or corporate malfeasance, but because computers naturally produce data.
Cardinal Richelieu famously said: "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." When all your words and actions can be saved for later examination, different rules have to apply.
Society works precisely because conversation is ephemeral; because people forget, and because people don't have to justify every word they utter.
Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official correspondence. A data pattern indicating "terrorist tendencies" is no substitute for a real investigation. Being constantly scrutinized undermines our social norms; furthermore, it's creepy. Privacy isn't just about having something to hide; it's a basic right that has enormous value to democracy, liberty, and our humanity.
We're not going to stop the march of technology, just as we cannot un-invent the automobile or the coal furnace. We spent the industrial age relying on fossil fuels that polluted our air and transformed our climate. Now we are working to address the consequences. (While still using said fossil fuels, of course.) This time around, maybe we can be a little more proactive.
Just as we look back at the beginning of the previous century and shake our heads at how people could ignore the pollution they caused, future generations will look back at us -- living in the early decades of the information age -- and judge our solutions to the proliferation of data.
We must, all of us together, start discussing this major societal change and what it means. And we must work out a way to create a future that our grandchildren will be proud of.
This essay originally appeared on the BBC.com website.
http://news.bbc.co.uk/1/hi/technology/7897892.stm

National ID cards:
http://www.schneier.com/essay-160.html

Surveillance cameras:
http://www.schneier.com/essay-225.html

RFID chips:
http://epic.org/privacy/rfid/

Cell phone surveillance:
http://computerworld.com/action/article.do?command=viewArticleBasic&articleId=9127462 or http://tinyurl.com/au2f4n

Wholesale surveillance:
http://www.schneier.com/essay-147.html

Data mining:
http://www.schneier.com/essay-108.html

The future of surveillance:
http://www.schneier.com/essay-109.html

Face recognition:
http://epic.org/privacy/facerecognition/

Privacy and the younger generation:
http://nymag.com/news/features/27341/

Ill effects of constant surveillance:
http://news.bbc.co.uk/1/hi/uk_politics/7872425.stm

The value of privacy:
http://www.schneier.com/essay-114.html


** *** ***** ******* *********** *************

News

Uni-ball is using fear to sell its hard-to-erase pen -- but it's the wrong fear. They're confusing check-washing fraud, where someone takes a check and changes the payee and maybe the amount, with identity theft. And how can someone steal money from me by erasing and changing information on a tax form? Are they going to cause my refund check to be sent to another address? This is getting awfully Byzantine.
http://videogum.com/archives/commercials/s-epatha-merkerson-will-terrif_045001.html or http://tinyurl.com/7jcful
http://www.schneier.com/blog/archives/2007/09/using_fear_to_s.html

Los Alamos has lost 80 computers: no idea if they're stolen, or just misplaced. Typical story -- not even worth commenting on -- but this great comment explains a lot about what was wrong with their security policy: "The letter, addressed to Department of Energy security officials, contends that 'cyber security issues were not engaged in a timely manner' because the computer losses were treated as a 'property management issue.'" The real risk in computer losses is the data, not the hardware. I thought everyone knew that.
http://www.google.com/hostednews/afp/article/ALeqM5jXipyrzU1GKO4KQ3f4hhKyLvJvTA or http://tinyurl.com/d7oxy5
Difficult-to-pronounce things are judged to be more risky than easy-to-pronounce things:
http://www.ncbi.nlm.nih.gov/pubmed/19170941

New paper: "WiFi networks and malware epidemiology," by Hao Hu, Steven Myers, Vittoria Colizza, and Alessandro Vespignani. Honestly, I'm not sure I understood most of the article. And I don't think that their model is all that great. But I like to see these sorts of methods applied to malware and infection rates.
http://www.pnas.org/content/early/2009/01/26/0811973106
http://arxiv.org/abs/0706.3146

HIPAA accountability in U.S. stimulus bill:
http://www.schneier.com/blog/archives/2009/02/hipaa_accountab.html

Terrorism common sense from MI6:
http://www.theregister.co.uk/2009/02/11/mi6_spy_rubbishes_terrorism_fear/ or http://tinyurl.com/cxfl8s
Here's an analysis of 30,000 passwords from phpbb.com:
http://www.darkreading.com/blog/archives/2009/02/phpbb_password.html
It's similar to my analysis of 34,000 MySpace passwords:
http://www.schneier.com/blog/archives/2006/12/realworld_passw.html
Seems like we still can't choose good passwords. Conficker.B exploits this, trying about 200 common passwords to help spread itself.
http://www.sophos.com/blogs/gc/g/2009/01/16/passwords-conficker-worm/
Blog entry:
http://www.schneier.com/blog/archives/2009/02/another_passwor.html

Evidence of the effectiveness of the "broken windows" theory of crime fighting:
http://www.boston.com/news/local/massachusetts/articles/2009/02/08/breakthrough_on_broken_windows/ or http://tinyurl.com/cslqo5
http://www.ncjrs.gov/App/publications/Abstract.aspx?id=246202

The NSA wants help eavesdropping on Skype:
http://www.theregister.co.uk/2009/02/12/nsa_offers_billions_for_skype_pwnage/ or http://tinyurl.com/a9hn2n
I'm sure this is a real problem. Here's an article claiming that Italian criminals are using Skype more than the telephone because of eavesdropping concerns.
http://www.theregister.co.uk/2009/02/16/italian_crooks_skype/

A study from New Jersey shows that Megan's Law -- laws designed to identify sex offenders to the communities they live in -- is ineffective in reducing sex crimes or deterring recidivists.
http://www.nj.com/news/index.ssf/2009/02/study_finds_megans_law_fails_t_1.html or http://tinyurl.com/b2mql2
Another Conficker variant: Conficker B++. This is one well-designed piece of malware.
http://www.schneier.com/blog/archives/2009/02/new_conficker_v.html

President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation's cybersecurity policies.
http://www.usatoday.com/tech/2009-02-16-cybersecurity-expert-obama_N.htm or http://tinyurl.com/cx3kon
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9127682&intsrc=news_ts_head or http://tinyurl.com/d2ygpp
This interview, conducted last year, will give you a good idea of how she thinks.
http://www2.computer.org/portal/web/computingnow/1208/whatsnew/securityandprivacy or http://tinyurl.com/by28l7
Maine man tries to build a dirty bomb and no one cares, probably because he isn't Muslim. White supremacist terrorism just isn't sexy these days.
http://jonathanstray.com/maine-man-tries-to-build-dirty-bomb

There are rumors of prototype electromagnetic pulse grenades:
http://www.theregister.co.uk/2009/02/12/electropulse_grenades/

TrapCall is a new service that reveals the caller ID on anonymous or blocked calls.
http://blog.wired.com/27bstroke6/2009/02/trapcall.html

Judge orders defendant to decrypt laptop: interesting Fifth Amendment case.
http://news.cnet.com/8301-13578_3-10172866-38.html

Use this shower mirror with a hidden camera to catch the lovers of cheating spouses:
http://www.dpl-surveillance-equipment.com/100611.html
The site has a wide variety of hidden cameras in common household objects.
http://www.dpl-surveillance-equipment.com/wireless_hidden_cameras.html

University of Miami law professor Michael Froomkin writes about ID cards and society in "Identity Cards and Identity Romanticism."
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1309222
http://www.schneier.com/blog/archives/2009/03/michael_froomki.html

This commentary on the UK government's national security strategy is scary: "Sir David Omand, the former Whitehall security and intelligence co-ordinator, sets out a blueprint for the way the state will mine data -- including travel information, phone records and emails -- held by public and private bodies and admits: 'Finding out other people's secrets is going to involve breaking everyday moral rules.'" In short: it's immoral, but we're going to do it anyway.
http://www.guardian.co.uk/uk/2009/feb/25/personal-data-terrorism-surveillance or http://tinyurl.com/c5ll6r
Programs "staple" and "unstaple" perform all-or-nothing encryption. Just demonstration code, but interesting all the same.
http://sysnet.ucsd.edu/projects/staple/Interesting paper: "Optimised to Fail: Card Readers for Online Banking," by Saar Drimer, Steven J. Murdoch, and Ross Anderson.
http://www.cl.cam.ac.uk/~sjm217/papers/fc09optimised.pdfhttp://www.lightbluetouchpaper.org/2009/02/26/optimised-to-fail-card-readers-for-online-banking/ or http://tinyurl.com/bdnafk
I'm sure you need some skill to actually use this self-defense pen, and I'm also sure it'll get through airport security checkpoints just fine.
http://www.botachtactical.com/kzxtremepen.html

This article gives an overview of U.S. military robots, and discusses some of the issues regarding the ethics of their use in war.
http://www.thenewatlantis.com/publications/military-robots-and-the-laws-of-war or http://tinyurl.com/csoj98
The article was adapted from the author's book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet. Related is this paper on the ethics of autonomous military robots.
http://www.schneier.com/blog/archives/2008/01/ethics_of_auton.html
Blog entry:
http://www.schneier.com/blog/archives/2009/03/history_and_eth.html

Secret NATO documents about the war in Afghanistan leaked due to bad password:
https://secure.wikileaks.org/wiki/N1

Security theater scare mongering, in hotels and churches:
http://news.bbc.co.uk/1/hi/england/london/7933004.stm
http://www.cnn.com/2009/CRIME/03/09/church.security/index.html
http://www.schneier.com/blog/archives/2009/03/security_theate_2.html

Fascinating history of the techniques used to distribute child porn throughout the world:
http://wikileaks.org/wiki/My_life_in_child_porn
http://www.schneier.com/blog/archives/2009/03/the_techniques.html#c356628 or http://tinyurl.com/asnc63

Google Maps spam:
http://www.schneier.com/blog/archives/2009/03/google_map_spam.html

This story of the world's largest diamond heist reads like a movie plot:
http://www.wired.com/politics/law/magazine/17-04/ff_diamonds?currentPage=all or http://tinyurl.com/ak8hrx
Many Sentex keypads, which are used to secure doors everywhere, can be opened with a default admin password:
http://www.schneier.com/blog/archives/2009/03/the_doghouse_se_1.html


** *** ***** ******* *********** *************

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization's network. The bomb would have "detonated" on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything -- and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.
Luckily -- and it does seem it was pure luck -- another programmer discovered the script a week later, and disabled it.
Insiders are a perennial problem. They have access, and they're known by the system. They know how the system and its security works, and its weak points. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana's attempt at revenge, these insiders can have pretty intense motives -- motives that can only intensify as the economy continues to suffer and layoffs increase.
Insiders are especially pernicious attackers because they're trusted. They have access because they're *supposed* to have access. They have opportunity, and an understanding of the system, because they use it -- or they designed, built, or installed it. They're already inside the security system, making them much harder to defend against.
It's not possible to design a system without trusted people. They're everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act -- sometimes broadly, sometimes narrowly -- in the company's name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn't operate without trusted people.
Replacing trusted people with computers doesn't make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.
Of course, this problem is much, much older than computers. And the solutions haven't changed much throughout history, either. There are five basic techniques to deal with trusted people:
1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.
2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA's no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.
3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as "need to know" and other levels of security clearance.
4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It's why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It's the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It's why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It's why key bank employees need to take their two-week vacations all at once -- so their replacements have a chance to uncover any fraud. (A small sketch of this two-person rule follows this list.)
5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.
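To make technique 4 concrete, here is a minimal Python sketch of a dual-authorization check. The signer names and dollar threshold are invented for illustration; real systems hang the same logic on missile launches, server changes, or teller overrides.

    # Technique 4 as code: above a threshold, no single trusted person
    # can act alone. Names and threshold are hypothetical.
    AUTHORIZED_SIGNERS = {"alice", "bob", "carol"}
    DUAL_CONTROL_THRESHOLD = 10000  # dollars

    def release_payment(amount, approvals):
        """Release a payment only with enough distinct authorized signers."""
        signers = set(approvals) & AUTHORIZED_SIGNERS
        required = 2 if amount > DUAL_CONTROL_THRESHOLD else 1
        if len(signers) < required:
            raise PermissionError("need %d distinct authorized signers, got %d"
                                  % (required, len(signers)))
        return "released $%d (signed by %s)" % (amount, ", ".join(sorted(signers)))

    print(release_payment(500, {"alice"}))            # one signature suffices
    print(release_payment(50000, {"alice", "bob"}))   # dual control enforced

The point of the sketch is that the check requires two *different* trusted people, so a single insider -- even a fully authorized one -- cannot act alone.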
These security techniques don't only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren't perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.
Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn't have an audit process by which a change one person makes on the servers is checked by someone else; I'm sure that would be prohibitively expensive. Certainly the company's IT department should have terminated Makwana's network access as soon as he was fired, and not at the end of the day.
In the end, systems will always have trusted people who can subvert them. It's important to keep in mind that incidents like this don't happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things -- like disabling access immediately upon termination -- can go a long way.
This essay originally appeared on the Wall Street Journal website.
http://online.wsj.com/article/SB123447990459779609.html

Makwana:
http://blogs.zdnet.com/BTL/?p=11905
http://www.theregister.co.uk/2009/01/29/fannie_mae_sabotage_averted/
http://blog.wired.com/27bstroke6/2009/01/fannie.html

Economic downturn increases insider threat:
http://news.bbc.co.uk/1/hi/technology/7875904.stm

Hospital employees illegally accessing patient data:
http://www.schneier.com/blog/archives/2007/10/27_suspended_fo.html

Insecurity in electronic voting machines:
http://www.schneier.com/blog/archives/2006/11/voting_technolo.html
http://www.nytimes.com/2008/01/06/magazine/06Vote-t.html
http://www.schneier.com/essay-101.html
http://freedom-to-tinker.com/blog/dwallach/vendor-misinformation-e-voting-world or http://tinyurl.com/5c7kxn
http://www.schneier.com/blog/archives/2008/08/diebold_finally.html
http://blog.wired.com/27bstroke6/2009/01/diebold-audit-l.html
http://www.schneier.com/essay-068.html
http://www.crypto.com/blog/ohio_voting/
http://www.huffingtonpost.com/kirsten-anderson/an-interview-with-david-w_b_64063.html or http://tinyurl.com/ad6rn3

Computerized gambling machine fraud:
http://www.reviewjournal.com/lvrj_home/1998/Jan-10-Sat-1998/news/6745681.html or http://tinyurl.com/xswg

Replacing people with computers:
http://www.schneier.com/blog/archives/2008/12/comparing_the_s.html

Audit:
http://www.schneier.com/blog/archives/2008/12/audit.html


** *** ***** ******* *********** *************

The Doghouse: Singularics

This is priceless:

"Our advances in Prime Number Theory have led to a new branch of mathematics called Neutronics. Neutronic functions make possible for the first time the ability to analyze regions of mathematics commonly thought to be undefined, such as the point where one is divided by zero. In short, we have developed a new way to analyze the undefined point at the singularity which appears throughout higher mathematics.
"This new analytic technique has given us profound insight into the way that prime numbers are distributed throughout the integers. According to RSA's website, there are over 1 billion licensed instances of RSA public-key encryption in use in the world today. Each of these instances of the prime number based RSA algorithm can now be deciphered using Neutronic analysis. Unlike RSA, Neutronic Encryption is not based on two large prime numbers but rather on the Neutronic forces that govern the distribution of the primes themselves. The encryption that results from Singularic's Neutronic public-key algorithm is theoretically impossible to break."
You'd think that anyone who claims to be able to decrypt RSA at the key lengths in use today would, maybe, um, demonstrate that at least once. Otherwise, this can all be safely ignored as snake oil.
The founder and CTO also claims to have proved the Riemann Hypothesis, if you care to wade through the 63-page paper.
http://www.singularics.com/products/encryption/

Snake oil:
http://www.schneier.com/crypto-gram-9902.html#snakeoil

Riemann Hypothesis "proof":
http://www.singularics.com/science/mathematics/OnNeutronicFunctions.pdf or http://tinyurl.com/agmoy9
** *** ***** ******* *********** *************

Three Security Anecdotes from the Insect World

Beet armyworm caterpillars react to the sound of a passing wasp by freezing in place, or even dropping off the plant. Unfortunately, armyworm intelligence isn't good enough to tell the difference between enemy aircraft (the wasps that prey on them) and harmless commercial flights (bees); they react the same way to either. So by producing nectar for bees, plants not only get pollinated, but also gain some protection against being eaten by caterpillars.
Small hive beetles live by entering beehives to steal combs and honey. They home in on the hives by detecting the bees' own alarm pheromones. They also track in yeast that ferments the pollen and releases chemicals that spoof the alarm pheromones, attracting more beetles and more yeast. Eventually the bees abandon the hive, leaving the beetles and yeast to finish off the pollen and honey.
Mountain alcon blue caterpillars get ants to feed them by spoofing a biometric: the sounds made by the queen ant.
http://scienceblogs.com/notrocketscience/2008/12/buzzing_bees_scare_caterpillars_away_from_plants.php or http://tinyurl.com/b2fp7m
http://scienceblogs.com/notrocketscience/2009/01/beetle_and_yeast_team_up_against_bees.php or http://tinyurl.com/96kdea
http://scienceblogs.com/notrocketscience/2009/02/butterflies_scrounge_off_ants_by_mimicking_the_music_of_quee.php or http://tinyurl.com/cxu8cm
** *** ***** ******* *********** *************

The Kindness of Strangers

When I was growing up, children were commonly taught: "don't talk to strangers." Strangers might be bad, we were told, so it's prudent to steer clear of them.
And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.
These two pieces of advice may seem to contradict each other, but they don't. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it's not a random choice. It's more likely, although still unlikely, that the stranger is up to no good.
As a species, we tend to help each other, and a surprising amount of our security and safety comes from the kindness of strangers. During disasters: floods, earthquakes, hurricanes, bridge collapses. In times of personal tragedy. And even in normal times.
If you're sitting in a café working on your laptop and need to get up for a minute, ask the person sitting next to you to watch your stuff. He's very unlikely to steal anything. Or, if you're nervous about that, ask the three people sitting around you. Those three people don't know each other, and will not only watch your stuff, but they'll also watch each other to make sure no one steals anything.
Again, this works because you're selecting the people. If three people walk up to you in the cafe and offer to watch your computer while you go to the bathroom, don't take them up on that offer. Your odds of getting three honest people are much lower.
Some computer systems rely on the kindness of strangers, too. The Internet works because nodes benevolently forward packets to each other without any recompense from either the sender or receiver of those packets. Wikipedia works because strangers are willing to write for, and edit, an encyclopedia with no recompense.
Collaborative spam filtering is another example. Basically, once someone notices a particular e-mail is spam, he marks it, and everyone else in the network is alerted that it's spam. Marking the e-mail is a completely altruistic task; the person doing it gets no benefit from the action. But he receives benefit from everyone else doing it for other e-mails.
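A minimal sketch of how such a system can work -- an illustration only, not any particular product's protocol -- looks like this in Python:

    import hashlib

    shared_spam_db = {}  # message fingerprint -> number of users reporting it

    def fingerprint(message):
        # Real systems use fuzzy hashes so small mutations still match;
        # an exact SHA-256 over normalized text keeps the sketch simple.
        return hashlib.sha256(message.strip().lower().encode()).hexdigest()

    def report_spam(message):
        fp = fingerprint(message)
        shared_spam_db[fp] = shared_spam_db.get(fp, 0) + 1

    def is_spam(message, threshold=3):
        return shared_spam_db.get(fingerprint(message), 0) >= threshold

    for _ in range(3):                # three altruistic strangers mark it
        report_spam("Cheap pills!!!")
    print(is_spam("cheap pills!!!"))  # True -- everyone else now benefits

The threshold is the interesting design choice: requiring several independent reports means one malicious stranger can't censor legitimate mail.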
Tor is a system for anonymous Web browsing. The details are complicated, but basically, a network of Tor servers passes Web traffic among each other in such a way as to anonymize where it came from. Think of it as a giant shell game. As a Web surfer, I put my Web query inside a shell and send it to a random Tor server. That server knows who I am but not what I am doing. It passes that shell to another Tor server, which passes it to a third. That third server -- which knows what I am doing but not who I am -- processes the Web query. When the Web page comes back to that third server, the process reverses itself and I get my Web page. Assuming enough Web surfers are sending enough shells through the system, even someone eavesdropping on the entire network can't figure out what I'm doing.
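The shell game can be sketched in a few lines of Python. The XOR-with-SHA-256 "cipher" below is a toy stand-in for real encryption, and real Tor also tucks the next-hop address inside each layer and negotiates keys cryptographically; the sketch only shows how each relay peels exactly one layer:

    import hashlib

    def toy_cipher(key, data):
        """XOR with a SHA-256 keystream: the same call encrypts and decrypts."""
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(b ^ k for b, k in zip(data, stream))

    relay_keys = {"relay1": b"k1", "relay2": b"k2", "relay3": b"k3"}
    route = ["relay1", "relay2", "relay3"]

    # The client wraps the query innermost-first, so relay1 peels the
    # outermost shell and relay3 peels the last one.
    packet = b"GET http://example.com/"
    for relay in reversed(route):
        packet = toy_cipher(relay_keys[relay], packet)

    # Each relay removes one shell: relay1 knows the client but not the
    # query; relay3 knows the query but not the client.
    for relay in route:
        packet = toy_cipher(relay_keys[relay], packet)
    print(packet)  # b'GET http://example.com/'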
It's a very clever system, and it protects a lot of people, including journalists, human rights activists, whistleblowers, and ordinary people living in repressive regimes around the world. But it only works because of the kindness of strangers. No one gets any benefit from being a Tor server; it uses up bandwidth to forward other people's packets around. It's more efficient to be a Tor client and use the forwarding capabilities of others. But if there are no Tor servers, then there's no Tor. Tor works because people are willing to set themselves up as servers, at no benefit to them.
Alibi clubs work along similar lines. You can find them on the Internet, and they're loose collections of people willing to help each other out with alibis. Sign up, and you're in. You can ask someone to pretend to be your doctor and call your boss. Or someone to pretend to be your boss and call your spouse. Or maybe someone to pretend to be your spouse and call your boss. Whatever you want, just ask and some anonymous stranger will come to your rescue. And because your accomplice is an anonymous stranger, it's safer than asking a friend to participate in your ruse.
There are risks in these sorts of systems. Regularly, marketers and other people with agendas try to manipulate Wikipedia entries to suit their interests. Intelligence agencies can, and almost certainly have, set themselves up as Tor servers to better eavesdrop on traffic. And a do-gooder could join an alibi club just to expose other members. But for the most part, strangers are willing to help each other, and systems that harvest this kindness work very well on the Internet.
This essay originally appeared on the Wall Street Journal website.
http://online.wsj.com/article/SB123567809587886053.html

Tor:
http://www.torproject.org/torusers.html.en
http://www.torproject.org

Alibi clubs:
http://www.nytimes.com/2004/06/26/technology/26ALIB.html?hp
http://www.alibinetwork.com/index.jsp


** *** ***** ******* *********** *************

New eBay Fraud

Here's a clever fraud, exploiting relative delays in eBay, PayPal, and UPS shipping.
"The buyer reported the item as 'destroyed' and demanded and got a refund from Paypal. When the buyer shipped it back to Chad and he opened it, he found there was nothing wrong with it -- except that the scammer had removed the memory, processor and hard drive. Now Chad is out $500 and left with a shell of a computer, and since the item was 'received' Paypal won't do anything."
Very clever. The seller accepted the return from UPS after a visual inspection, so UPS considered the matter closed. PayPal and eBay both considered the matter closed. If the amount was large enough, the seller could sue, but how could he prove that the computer was functional when he sold it?
It seems to me that the only way to solve this is for PayPal to not process refunds until the seller confirms what he received back is the same as what he shipped. Yes, then the seller could commit similar fraud, but sellers (certainly professional ones) have a greater reputational risk.
http://consumerist.com/5159479/ebay-scammer-says-pc-destroyed-in-mail-takes-500-sends-back-destroyed-pc-minus-parts or http://tinyurl.com/czj2bu
** *** ***** ******* *********** *************

Schneier News

Schneier is speaking at MinneWebCon on April 6 in Minneapolis.
http://minnewebcon.umn.edu/

Schneier is speaking at the 3rd Annual Asia-Pacific Programme for Senior National Security Officers (APPSNO) on April 14 in Singapore.
http://www.rsis.edu.sg/cens/events/upcoming_events.html


** *** ***** ******* *********** *************

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don't patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.
People regularly don't do things they are supposed to do: change the oil in their cars, go to the dentist, replace the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it's unlikely to matter. No feedback, no learning.
We've tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren't generally felt immediately.
This makes security primarily a hindrance to the user. It's a recurring obstacle: something that interferes with the seamless performance of the user's task. And it's human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren't obvious, then people will naturally do it.
This is the problem with Microsoft's User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they're running. But the security prompts pop up too frequently, and there's rarely any ill-effect from ignoring them. So people do ignore them.
This doesn't mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.
The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or even better to take security out of their hands entirely.
For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren't really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It's easier to use it than not.
For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it's not defenceless.
Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.
The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.
This essay originally appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/mar/12/read-me-first

Users are a problem:
http://www.informationweek.com/news/security/client/showArticle.jhtml?articleID=213002007 or http://tinyurl.com/ab8pux
http://www.informationweek.com/news/security/attacks/showArticle.jhtml?articleID=212700890 or http://tinyurl.com/b2s2ep

Lousy passwords:
http://www.schneier.com/essay-144.html

Choosing good passwords:
http://www.schneier.com/essay-148.html

Microsoft's UAC problems:
http://arstechnica.com/security/news/2008/04/vistas-uac-security-prompt-was-designed-to-annoy-you.ars or http://tinyurl.com/cxazee

The limits of education:
http://www.schneier.com/essay-139.html

Blaming the user:
http://www.schneier.com/blog/archives/2005/12/cell_phone_comp.html
http://news.bbc.co.uk/1/hi/programmes/newsnight/7265437.stm

Large-scale chip-and-pin fraud:
http://www.telegraph.co.uk/news/newstopics/politics/lawandorder/3173346/Chip-and-pin-scam-has-netted-millions-from-British-shoppers.html or http://tinyurl.com/4xuk69
** *** ***** ******* *********** *************

Balancing Security and Usability in Authentication

Since January, the Conficker.B worm has been spreading like wildfire across the Internet: infecting the French Navy, hospitals in Sheffield, the court system in Houston, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: Why in the world are IT administrators still using easy-to-guess passwords?
Computer authentication systems have two basic requirements. They need to keep the bad guys from accessing your account, and they need to allow you to access your account. Both are important, and every authentication system is a balancing act between the two. Too little security, and the bad guys will get in too easily. But if the authentication system is too complicated, restrictive, or hard to use, you won't be able to -- or won't bother to -- use it.
Passwords are the most common authentication system, and a good place to start. They're very easy to implement and use, which is why they're so popular. But as computers have become faster, password guessing has become easier. Most people don't choose passwords that are complicated enough to remain secure against modern password-guessing attacks. Conficker.B is even less clever; it just tries a list of about 200 common passwords.
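The attack is embarrassingly simple. A Python sketch, with a five-entry list standing in for the roughly 200 passwords the worm carries:

    import hashlib

    COMMON_PASSWORDS = ["123456", "password", "admin", "letmein", "qwerty"]

    def crack(stored_hash):
        """Try each common password against a stolen (unsalted) hash."""
        for guess in COMMON_PASSWORDS:
            if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
                return guess
        return None

    # An administrator who chose "admin" falls on the third guess:
    print(crack(hashlib.sha256(b"admin").hexdigest()))  # 'admin'

Conficker.B doesn't even need stolen hashes; it simply tries its list against live network logins, which works just as well when nothing rate-limits the attempts.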
To combat password guessing, many systems force users to choose harder-to-guess passwords -- requiring minimum lengths, non-alphanumeric characters, etc. -- and to change their passwords more frequently. The first makes guessing harder, and the second makes a guessed password less valuable. This, of course, makes the system more annoying, so users respond by writing their passwords down and taping them to their monitors, or simply forgetting them more often. Smarter users write them down and put them in their wallets, or use a secure password database like Password Safe.
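Such composition rules amount to a few lines of code. A sketch -- the length and character-class thresholds here are illustrative, not any particular standard:

    import string

    def meets_policy(pw, min_len=10):
        """Require a minimum length plus all four character classes."""
        classes = [string.ascii_lowercase, string.ascii_uppercase,
                   string.digits, string.punctuation]
        return (len(pw) >= min_len and
                all(any(c in cls for c in pw) for cls in classes))

    print(meets_policy("password"))       # False: too short, one class
    print(meets_policy("Tr0ub4dor&33!"))  # True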
Users forgetting their passwords can be expensive -- sysadmins or customer service reps have to field phone calls and reset passwords -- so some systems include a backup authentication system: a secret question. The idea is that if you forget your password, you can authenticate yourself with some personal information that only you know. Your mother's maiden name was traditional, but these days there are all sorts of secret questions: your favourite schoolteacher, favourite colour, street you grew up on, name of your first pet, and so on. This might make the system more usable, but it also makes it much less secure: answers can be easily guessable, and are often known by people close to you.
A common enhancement is a one-time password generator, like a SecurID token. This is a small device with a screen that displays a password that changes automatically once a minute. Adding this is called two-factor authentication, and is much more secure, because this token -- "something you have" -- is combined with a password -- "something you know." But it's less usable, because the tokens have to be purchased and distributed to all users, and far too often it's "something you lost or forgot." And it costs money. Tokens are far more frequently used in corporate environments, but banks and some online gaming worlds have taken to using them -- sometimes only as an option, because people don't like them.
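RSA SecurID's algorithm is proprietary, but the generic construction behind such tokens can be sketched with the open HMAC-based one-time-password design (HOTP, RFC 4226) driven by a time counter. Token and server share a secret provisioned at enrollment, so both compute the same six digits each minute:

    import hmac, hashlib, struct, time

    def one_time_password(secret, interval=60):
        counter = int(time.time()) // interval       # changes once a minute
        digest = hmac.new(secret, struct.pack(">Q", counter),
                          hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                   # HOTP dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return "%06d" % (code % 1000000)             # six-digit display

    shared_secret = b"provisioned-at-enrollment"     # hypothetical secret
    print(one_time_password(shared_secret))          # server computes the same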
In most cases, how an authentication system works when a legitimate user tries to log on is much more important than how it works when an impostor tries to log on. No security system is perfect, and there is some level of fraud associated with any of these authentication methods. But the instances of fraud are rare compared to the number of times someone tries to log on legitimately. If a given authentication system let the bad guys in one in a hundred times, a bank could decide to live with the problem -- or try to solve it in some other way. But if the same authentication system prevented legitimate customers from logging on even one in a thousand times, the number of complaints would be enormous and the system wouldn't survive one week.
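Working through hypothetical numbers makes the asymmetry concrete -- say a bank with a million legitimate logins and a hundred fraud attempts per day:

    legit_logins   = 1000000   # per day (hypothetical)
    fraud_attempts = 100       # per day (hypothetical)

    false_accepts = fraud_attempts * (1 / 100)   # "one in a hundred" let in
    false_rejects = legit_logins * (1 / 1000)    # "one in a thousand" locked out

    print(false_accepts)  # 1.0 fraudulent login a day: livable
    print(false_rejects)  # 1000.0 locked-out customers a day: not livable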
Balancing security and usability is hard, and many organizations get it wrong. But it's also evolving; organizations needing to tighten their security continue to push more involved authentication methods, and more savvy Internet users are willing to accept them. And certainly IT administrators need to be leading that evolutionary change.
A version of this essay was originally published in The Guardian.
http://www.guardian.co.uk/technology/2009/feb/19/insecure-passwords-conflickerb-worm or http://tinyurl.com/awd5np

Conficker.B:
http://www.crn.com/security/212902319
http://www.telegraph.co.uk/news/worldnews/europe/france/4547649/French-fighter-planes-grounded-by-computer-virus.html or http://tinyurl.com/bbku57
http://www.smarthealthcare.com/sheffield-conficker
http://www.theregister.co.uk/2009/02/09/houston_malware_infection/
http://arstechnica.com/security/news/2009/01/conficker-worm-spikes-infects-1-1-million-pcs-in-24-hours.ars or http://tinyurl.com/dmvd8d
http://securitywatch.eweek.com/virus_and_spyware/experts_-_conficker_usb_worm_spreading_quickly.html or http://tinyurl.com/bk5fs9
http://voices.washingtonpost.com/securityfix/2009/01/tricky_windows_worm_wallops_mi.html or http://tinyurl.com/8e8fbg
http://bt.counterpane.com/Risk_Assessment_W32.Conficker_Worm_Update2.pdf or http://tinyurl.com/detvm5
http://www.microsoft.com/security/portal/Entry.aspx?Name=Worm:Win32/Conficker.B or http://tinyurl.com/9vpbxs
http://www.sophos.com/blogs/gc/g/2009/01/16/passwords-conficker-worm/

Guessing passwords:
http://www.schneier.com/essay-246.html
http://www.schneier.com/essay-148.html

Password Safe:
http://www.schneier.com/passsafe.html

Security problems with secret questions:
http://www.schneier.com/blog/archives/2005/02/the_curse_of_th.html


** *** ***** ******* *********** *************

Comments from Readers

There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
http://www.schneier.com/blog


** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.