Companies and information: The leaky corporation | The Economist


The leaky corporation

IN EARLY February Hewlett-Packard showed off its new tablet computer, which it hopes will be a rival to Apple’s iPad. The event was less exciting than it might have been, thanks to the leaking of the design in mid-January. Other technology companies have suffered similar embarrassments lately. Dell’s timetable for bringing tablets to market appeared on a tech-news website. A schedule for new products from NVIDIA, which makes graphics chips, also seeped out.

Geeks aren’t the only ones who can’t keep a secret. In January it emerged that Renault had suspended three senior executives, allegedly for passing on blueprints for electric cars (which the executives deny). An American radio show has claimed to have found the recipe for Coca-Cola’s secret ingredient in an old newspaper photograph. Facebook’s corporate privacy settings went awry when some of the social network’s finances were published. A strategy document from AOL came to light, revealing that the internet and media firm’s journalists were expected to write five to ten articles a day.

Meanwhile, Julian Assange has been doing his best to make bankers sweat. In November the founder of WikiLeaks promised a “megaleak” early in 2011. He was said to be in possession of a hard drive from the laptop of a former executive of an unnamed American bank, containing documents even more toxic than the copiously leaked diplomatic cables from the State Department. They would reveal an “ecosystem of corruption” and “take down a bank or two”.

“I think it’s great,” Mr Assange said in a television interview in January. “We have all these banks squirming, thinking maybe it’s them.” At Bank of America (BofA), widely thought to be the bank in question, an internal investigation began. Had any laptop gone missing? What could be on its hard drive? And how should BofA react if, say, compromising e-mails were leaked?

The bank’s bosses and investigators can relax a bit. Recent reports say that Mr Assange has acknowledged in private that the material may be less revealing than he had suggested. Financial experts would be needed to determine whether any of it was at all newsworthy.

Even so, the WikiLeaks threat and the persistent leaking of other supposedly confidential corporate information have brought an important issue to the fore. Companies are creating an ever-growing pile of digital information, from product designs to employees’ e-mails. Keeping tabs on it all is increasingly hard, not only because there is so much of it but also because of the ease of storing and sending it. Much of this information would do little damage if it seeped into the outside world; some of it, indeed, might well do some good. But some could also be valuable to competitors—or simply embarrassing—and needs to be protected. Companies therefore have to decide what they should try to keep to themselves and how best to secure it.

Trying to prevent leaks by employees or to fight off hackers only helps so much. Powerful forces are pushing companies to become more transparent. Technology is turning the firm, long a safe box for information, into something more like a sieve, unable to contain all its data. Furthermore, transparency can bring huge benefits. “The end result will be more openness,” predicts Bruce Schneier, a data-security guru.

From safe to sieve

When corporate information lived only on paper, which was complemented by microfilm about 50 years ago, it was much easier to manage and protect than it is today. Accountants and archivists classified it; the most secret documents were put in a safe. Copying was difficult: it would have taken Bradley Manning, the soldier who is alleged to have sent the diplomatic cables to WikiLeaks, years to photograph or smuggle out all the 250,000 documents he is said to have downloaded—assuming that he was not detected.

Things did not change much when computers first made an appearance in firms. They were used mostly for accounting or other transactions, known as “structured information”. And they were self-contained systems to which few people had access. Even the introduction in the 1980s of more decentralised information-technology (IT) systems and personal computers (PCs) did not make much of a difference. PCs served at first as glorified typewriters.

It was only with the advent of the internet and its corporate counterpart, the intranet, that information began to flow more quickly. Employees had access to lots more data and could exchange electronic messages with the outer world. PCs became a receptacle for huge amounts of “unstructured information”, such as text files and presentations. The banker’s hard drive in Mr Assange’s possession is rumoured to contain several years’ worth of e-mails and attachments.

Now an even more important change is taking place. So far firms have spent their IT budgets mostly on what Geoffrey Moore of TCG Advisors, a firm of consultants, calls “systems of record”, which track the flow of money, products and people within a company and, more recently, its network of suppliers. Now, he says, firms are increasingly investing in “systems of engagement”. By this he means all kinds of technologies that digitise, speed up and automate a firm’s interaction with the outer world.

Mobile devices, video conferencing and online chat are the most obvious examples of these technologies: they allow instant communication. But they are only part of the picture, says Mr Moore. Equally important are a growing number of tools that enable new forms of collaboration: employees collectively edit online documents, called wikis; web-conferencing services help firms and their customers to design products together; and smartphone applications let companies collect information about people’s likes and dislikes and hence about market trends.

It is easy to see how such services will produce ever more data. They are one reason why IDC, a market-research firm, predicts that the “digital universe”, the amount of digital information created and replicated in a year, will increase to 35 zettabytes by 2020, from less than 1 zettabyte in 2009 (see chart); 1 zettabyte is 1 trillion gigabytes, or the equivalent of 250 billion DVDs. But these tools will also make a firm’s borders ever more porous. “WikiLeaks is just a reflection of the problem that more and more data are produced and can leak out,” says John Mancini, president of AIIM, an organisation dedicated to improving information management.

Two other developments are also poking holes in companies’ digital firewalls. One is outsourcing: contractors often need to be connected to their clients’ computer systems. The other is employees’ own gadgets. Younger staff, especially, who are attuned to easy-to-use consumer technology, want to bring their own gear to work. “They don’t like to use a boring corporate BlackBerry,” explains Mr Mancini.

The data drain

As a result, more and more data are seeping out of companies, even of the sort that should be well protected. When Eric Johnson of the Tuck School of Business at Dartmouth College and his fellow researchers went through popular file-sharing services last year, they found files that contained health-related information as well as names, addresses and dates of birth. In many cases, explains Mr Johnson, the reason for such leaks is not malice or even recklessness, but that corporate applications are often difficult to use, in particular in health care. To be able to work better with data, employees often transfer them into spreadsheets and other types of files that are easier to manipulate—but also easier to lose control of.

Although most leaks are not deliberate, many are. Renault, for example, claims to be a victim of industrial espionage. In a prominent insider-trading case in the United States, some hedge-fund managers are accused of having benefited from data leaked from Taiwanese semiconductor foundries, including spreadsheets showing the orders and thus the sales expectations of their customers.

Not surprisingly, therefore, companies feel a growing urge to prevent leaks. The pressure is regulatory as well as commercial. Stricter data-protection and other rules are pushing firms to keep a closer watch on information. In America, for instance, the Health Insurance Portability and Accountability Act (HIPAA) introduced security standards for personal health data. In lawsuits, companies must be able to produce all relevant digital information in court. No wonder that some executives have taken to using e-mail sparingly or not at all. Whole companies, however, cannot dodge the digital flow.

To help them plug the holes, companies are being offered special types of software. One is called “content management”. Programs sold by Alfresco, EMC Documentum and others let firms keep tabs on their digital content, classify it and define who has access to it. A junior salesman, for instance, will not be able to see the latest financial results before publication—and thus cannot send them to a friend.

Another type, in which Symantec and Websense are the market leaders, is “data loss prevention” (DLP). This is software that sits at the edge of a firm’s network and inspects the outgoing data traffic. If it detects sensitive information, it sounds the alarm and can block the incriminating bits. The software is often used to prevent social-security and credit-card numbers from leaving a company—and thus make it comply with HIPAA and similar regulations.
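The core of such inspection is pattern-matching on outgoing text. A minimal sketch of the idea in Python follows; the patterns, names and checks are invented for illustration and bear no relation to how Symantec's or Websense's actual detection engines work:

```python
import re

# Illustrative patterns only; real DLP engines use far richer detection.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")       # American social-security number
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")     # candidate payment-card number

def luhn_ok(number: str) -> bool:
    """Luhn checksum, used to weed out random digit strings."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def scan_outgoing(text: str) -> list[str]:
    """Return the reasons, if any, to block an outgoing message."""
    hits = []
    if SSN_RE.search(text):
        hits.append("possible social-security number")
    for match in CARD_RE.finditer(text):
        if luhn_ok(match.group()):
            hits.append("possible credit-card number")
    return hits
```

A gateway running checks of this kind can flag or block a message before it leaves the network, which is precisely how such products help firms comply with HIPAA-style rules.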

A third field, newer than the first two, is “network forensics”. The idea is to keep an eye on everything that is happening in a corporate network, and thus to detect a leaker. NetWitness, a start-up company, says that its software records all the digital goings-on and then looks for suspicious patterns, creating “real-time situation awareness”, in the words of Edward Schwartz, its chief security officer.

There are also any number of more exotic approaches. Autonomy, a British software firm, offers “bells in the dark”. False records—made-up pieces of e-mail, say—are spread around the network. Because they are false, no one should ever have reason to open them. If somebody does, an alarm is triggered, just as a burglar breaking into a house at night might set one off.
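Security practitioners often call such false records "honeytokens". A toy sketch of the mechanism, with invented record names rather than anything resembling Autonomy's product: fake IDs are scattered through a store, and any read of one raises the alarm.

```python
import logging
import secrets

def plant_honeytokens(n: int = 3) -> set[str]:
    """Generate fake record IDs to scatter through a data store."""
    return {f"invoice-{secrets.token_hex(4)}" for _ in range(n)}

def fetch_record(record_id: str, store: dict, tokens: set[str]):
    """Ordinary lookup, except that reading a planted record trips the alarm."""
    if record_id in tokens:
        logging.warning("honeytoken %s accessed: possible intruder", record_id)
        raise PermissionError("tripwire record accessed")
    return store.get(record_id)
```

Because legitimate workflows never reference the planted IDs, a hit carries almost no false-positive risk.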

These programs deter some leakers and keep employees from doing stupid things. But reality rarely matches the marketing. Content-management programs are hard to use and rarely fully implemented. Role-based access control sounds fine in theory but is difficult in practice. Firms often do not know exactly what access should be assigned to whom. Even if they do, jobs tend to change quickly. A field study of an investment bank by Mr Johnson and his colleagues found that one department of 3,000 employees saw 1,000 organisational changes within only a few months.

This leads to what Mr Johnson calls “over-entitlement”. So that workers can get their jobs done, they are given access to more information than they really need. At the investment bank, more than 50% were over-entitled. Because access is rarely revoked, over time employees gain the right to see more and more. In some companies, Mr Johnson was able to predict a worker’s length of employment from how much access he had. But he adds that if role-based access control is enforced too strictly, employees have too little data to do their jobs.

Similarly, DLP is no guarantee against leaks: because it cannot tell what is in encrypted files, data can be wrapped up and smuggled out. Network forensics can certainly show what is happening in a small group of people working on a top-secret product. But it is hard to see how it can keep track of the ever-growing traffic that passes through or leaves big corporate IT systems, for instance through a simple memory stick (which plugs into a PC and can hold the equivalent of dozens of feature-length films). “Technology can’t solve the problem, just lower the probability of accidents,” explains John Stewart, the chief security officer of Cisco, a maker of networking equipment.

Other experts point out that companies face a fundamental difficulty. There is a tension in handling large amounts of data that can be seen by many people, argues Ross Anderson, of Cambridge University. If a system lets a few people do only very simple things—such as checking whether a product is available—the risks can be managed; but if it lets a lot of people do general inquiries it becomes insecure. SIPRNet, where the American diplomatic cables given to WikiLeaks had been stored, is a case in point: it provided generous access to several hundred thousand people.

In the corporate world, to limit the channels through which data can escape, some companies do not allow employees to bring their own gear to work or to use memory sticks or certain online services. Although firms have probably become more permissive since, a survey by Robert Half Technology, a recruitment agency, found in 2009 that more than half of chief information officers in America blocked the use of sites such as Facebook at work.

Yet this approach comes at a price, and not only because it makes a firm less attractive to Facebook-using, iPhone-toting youngsters. “More openness also creates trust,” argues Jeff Jarvis, a new-media sage who is writing a book about the virtues of transparency, entitled “Public Parts”. Dell, he says, gained a lot of goodwill when it started talking openly about its products’ technical problems, such as exploding laptop batteries. “If you open the kimono, a lot of good things happen,” says Don Tapscott, a management consultant and author: it keeps the company honest, creates more loyalty among employees and lowers transaction costs with suppliers.

More important still, if the McKinsey Global Institute, the research arm of a consulting firm, has its numbers right, limiting the adoption of systems of engagement can hurt profits. In a recent survey it found that firms that made extensive use of social networks, wikis and so forth reaped important benefits, including faster decision-making and increased innovation.

How then to strike the right balance between secrecy and transparency? It may be useful to think of a computer network as being like a system of roads. Just like accidents, leaks are bound to happen and attempts to stop the traffic will fail, says Mr Schneier, the security expert. The best way to start reducing accidents may not be employing more technology but making sure that staff understand the rules of the road—and its dangers. Transferring files onto a home PC, for instance, can be a recipe for disaster. It may explain how health data have found their way onto file-sharing networks. If a member of the employee’s family has joined such a network, the data can be replicated on many other computers.

Don’t do that again

Companies also have to set the right incentives. To avoid the problems of role-based access control, Mr Johnson proposes a system akin to a speed trap: it allows users to gain access to more data easily, but records what they do and hands out penalties if they abuse the privilege. He reports that Intel, the world’s largest chipmaker, issues “speeding tickets” to employees who break its rules.
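Intel's actual system is not public, but the general shape of a "speed trap" is easy to sketch: grant reads freely, log every one, and flag users who cross a threshold. Everything below, including the threshold, is invented for illustration.

```python
from collections import Counter

class SpeedTrapStore:
    """Toy 'speed trap' access model: easy access, fully logged, penalised late.

    The threshold and record layout are invented for illustration.
    """

    def __init__(self, records: dict, ticket_threshold: int = 100):
        self.records = records
        self.reads = Counter()          # audit log: reads per user
        self.ticket_threshold = ticket_threshold
        self.tickets = []               # users flagged for review

    def read(self, user: str, record_id: str):
        self.reads[user] += 1
        if self.reads[user] > self.ticket_threshold:
            self.tickets.append(user)   # issue a 'speeding ticket'
        return self.records.get(record_id)
```

The design trade-off is the one Mr Johnson describes: access is never denied up front, so work is not blocked, but abuse leaves a trail and a penalty.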

Mr Johnson is the first to admit that this approach is too risky for data that are very valuable or the release of which could cause a lot of damage. But most companies do not even realise what kind of information they have and how valuable or sensitive it is. “They are often trying to protect everything instead of concentrating on the important stuff,” reports John Newton, the chief technology officer of Alfresco.

The “WikiLeaks incident is an opportunity to improve information governance,” wrote Debra Logan, an analyst at Gartner, a research firm, and her colleagues in a recent note. A first step is to decide which data should be kept and for how long; many firms store too much, making leaks more likely. In a second round, says Ms Logan, companies must classify information according to how sensitive it is. “Only then can you have an intelligent discussion about what to protect and what to do when something gets leaked.”

Such an exercise could also be an occasion to develop what Mr Tapscott calls a “transparency strategy”: how closed or open an organisation wants to be. The answer depends on the business it is in. For companies such as Accenture, an IT consultancy and outsourcing firm, security is a priority from the top down because it is dealing with a lot of customer data, says Alastair MacWillson, who runs its security business. Employees must undergo security training regularly. As far as possible, software should control what leaves the company’s network. “If you try to do something with your BlackBerry or your laptop that you should not do,” explains Mr MacWillson, “the system will ask you: ‘Should you really be doing this?’”

At the other end of the scale is the Mozilla Foundation, which leads the development of Firefox, an open-source browser. Transparency is not just a natural inclination but a necessity, says Mitchell Baker, who chairs the foundation. If Mozilla kept its cards close to the chest, its global community of developers would not and could not help write the program. So it keeps secrets to a minimum: employees’ personal information, data that business partners do not want made public and security issues in its software. Everything else can be found somewhere on Mozilla’s many websites. And anyone can take part in its weekly conference calls.

Few companies will go that far. But many will move in this direction. The transparency strategy of Best Buy, an electronics retailer, is that its customers should know as much as its employees. Twitter tells its employees that they can tweet about anything, but that they should not do “stupid things”. In the digital era of exploding quantities of data that are increasingly hard to contain within companies’ systems, more companies are likely to become more transparent. Mr Tapscott and Richard Hunter, another technology savant, may not have been exaggerating much a decade ago, when they wrote books foreseeing “The Naked Corporation” and a “World Without Secrets”.



Several years ago I had the privilege of working with Steve Rosenbaum, author of Curation Nation. Back then Steve was already invested in the future of online curation and his grand conquest was playing out with, a realtime video curation network. At the time, he was also a staple at some of the tech industry’s most renowned conferences, sharing his vision for social, video, and curated content. As Steve was completing his new book, he asked if I would write the foreword. At the time I was finalizing the new version of Engage! and, as a result, I couldn’t make his deadline. Nonetheless, I was inspired to write an honorary foreword that I’ve held onto to celebrate the official release of Steve’s new book.

I share this digital foreword with you here…

I always appreciate when a very complex and important subject is simplified to ease understanding. Curation is no exception. The truth is that for several years there have been two kinds of people in social media: those who create content and those who consume it. Historically, creators were among the digital elite, the so-called digerati, as only a small percentage of users was actively dedicated to creating. But there’s a world wide web out there, and the everyday people consuming social content dramatically outnumber the digerati.

Forrester Research tracked how people adopt and use social technologies through its Technographics research. In 2010, Forrester observed that almost 70% of people using social media simply consumed content. They did not comment. They did not post. They kept the information they found online to themselves. My personal research would later show that if something was worthy of sharing, email served as the distribution channel of choice over any form of public sharing.

Creating original content, consistently over time, is daunting. While content platforms are designed to make publishing uncomplicated, these tools still require time, understanding and a tremendous amount of energy spent on audience building. The sacrifice hardly warrants the return. After all, it’s very unrewarding to spend time creating something special over and over, only to have it constantly debut to crickets. To join the ranks of the digerati, or even of content creators, requires a substantial investment, one that most cannot afford.

As we weave our social graph, we inherently earn built-in audiences, namely the people we know, for the thoughts, experiences, and information we share. Sharing now serves as a form of everyday communication and, in many ways, a form of social currency. Social networks live and breathe based on what we share online. As our experiences serve as network lifelines, the simplicity of introducing social objects into the stream becomes paramount. Social objects spark interaction and bring streams to life, and it is only getting easier for individuals to package, share and interact with these objects as their networks grow and evolve.

With the rise of Twitter and Facebook, we witnessed the emergence of new categories of publishing tools and corresponding networks. Today, microblogs and tumblelogs such as Posterous and Tumblr are approaching ubiquity. Tumblr alone boasts over 6 million users and 1.5 billion monthly pageviews, and it’s just the beginning. Unlike traditional platforms such as WordPress and Blogger, microblogs simplify the process of sharing experiences. But unlike Twitter and other forms of micromedia, microblogs have memory and they allow for the inclusion of expanded context. Like blogs, they serve as a repository for the brand “you” and all that you represent. These new tools build upon the foundation that houses the digerati, adding new rooms and levels for a new breed of content creator.

In my opinion, blogs will remain as the hallmark for deeper expression and demonstration of expertise and opinion. Microblogs will empower those with a voice to share their perspective with greater consistency without the emotional and time commitment required of blogging. Micromedia will serve as the bridges between the events, observations, and the social objects that bind us. And for those macro and micro bloggers, micromedia represents bridges to the content they create and the extended audiences they hope to reach. Additionally, micromedia represents the ability to surround social objects with interaction in real-time.

As social media pushes us further into the spotlight of living life in the public stream, we’re encouraged to share more of ourselves online and we’re rewarded for doing so. When we share something interesting, we earn reactions in the form of Tweets, ReTweets, comments, Likes, Favorites, shares and sometimes we’ll trigger micro and macro blog posts. The greatest reward for consistently sharing interesting content however is the request for new connections, friends, fans, and followers.

Social networks and the exchanges that take place after sharing a social object set the stage for a new genre of information commerce. These interactions essentially change how we communicate and connect with one another. They also change how we find and share information. And herein lies both the significance of social media and its opportunity.

Let me explain.

In social networks, the people we are connected to create our social graph. But as we share, we are changing the dynamics of these relationships. What was simply a digital reflection of the people we knew is now morphing into connected groups that also include people who share our interests.

As we share interesting content, we basically share a bit of ourselves, serving as an expression of what interests us. The new connections we earn as a result change the nature of our social graph. Now, social graphs are transforming into what I refer to as “nicheworks,” groups of networks within the greater network that represent various subjects and themes in addition to the friends and family we know.

While a social graph is defined by the individual connections one maintains in online networks, these nicheworks essentially create focused interest graphs that represent the network of individuals bound by expressed recurring public themes.

The evolution of social graphs into interest graphs sets the stage for a more efficient and connected series of networks that combine context and attention. Interest graphs link individuals to the people and information they most align with across focused themes. As a result, the social media landscape will undergo an interesting transformation as it forms the underpinning of information commerce and the 3C’s of social content – creation, curation, and consumption. While blogging typically resides in the upper echelons of the social media hierarchy, microblogging is extending the ability to create beyond the elite. Furthermore, a segment between microblogging and micromedia has opened up an entirely new class of content sharing. These new services further democratize the ability to publish relevant information through the interest graphs of interconnected nicheworks. While it’s not quite content creation, the role curation plays in information commerce is pivotal.

You might not even realize it, but you’ve already packed your bags and are well on your way to living in a curation nation.

Are you a content consumer or creator? Some might say both, but in reality, most serve a role between creation and consumption. Not quite creators and far from solely consumers, social curators introduce a new and important role into the pyramid of information commerce.

The traditional definition of a curator is the keeper of a museum or other collection. In social media, a curator is the keeper of their interest graphs. By discovering, organizing, and sharing relevant content from around the Web, curators invest in the integrity and vibrancy of their nicheworks and the relationships that define them. Information becomes currency, and the ability to repackage something of interest as a compelling, consumable and shareable social object is an art. As a result, the social capital of a curator is earned through qualifying, filtering, and refining relevant content, and through how well those objects spark engagement and learning.

Tools, networks and services that cater to the role of the curator are already gaining traction, with several already leading the way. Storify,,, Pearltrees, and represent an array of leading services amongst curators as they not only enable the repackaging and dissemination of information, they do so in captivating and engaging formats. Like social networks, these services also connect people to one another, but instead of creating social graphs, curation networks weave interest graphs. Rather than creating original content, curators discover relevant content and share it within their networks of relevance with added perspective. The stream of an interest graph is rich with context and narrative allowing anyone connected to learn and interact based on the subject matter that captivates them.

The art of curation also extends to traditional social networks such as Twitter and Facebook and, through status updates, to any social network of choice. Curated content also serves as a social object that sparks conversations and reactions, while breathing new life into the original content and extending its reach – wherever it may reside.

Curators play an important role in the evolution of new media, the reach of material information, and the social nicheworks that unite as a result. Curators promote interaction, collaboration, and education around the topics that are important to them. As such, services that empower curators will fill the void between creation and consumption. Forrester’s estimate that 70% of social-media users merely consume content will erode to much lower numbers as many ease into social networking through curation, sharing with others the content that captivates their attention. The ease of doing so converts the static consumer into a productive curator or perhaps, one day, a full-fledged creator.

With creation and curation increasing the exchange of information commerce, we are moving new media toward the mainstream, creating bridges between social and traditional media and the people who connect around related information. As such, what you discover, and equally what you share, creates an information economy rich with contextual and topical relevance, linked through shared experiences.

Welcome to a curation nation, population…growing!



Signal-to-noise ratio - Wikipedia, the free encyclopedia



Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to noise power. A ratio higher than 1:1 indicates more signal than noise. While SNR is commonly quoted for electrical signals, it can be applied to any form of signal (such as isotope levels in an ice core or biochemical signaling between cells).

Signal-to-noise ratio is sometimes used informally to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion.

Signal-to-noise ratio is defined as the power ratio between a signal (meaningful information) and the background noise (unwanted signal):

 \mathrm{SNR} = \frac{P_\mathrm{signal}}{P_\mathrm{noise}},

where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. If the signal and the noise are measured across the same impedance, then the SNR can be obtained by calculating the square of the amplitude ratio:

 \mathrm{SNR} = \frac{P_\mathrm{signal}}{P_\mathrm{noise}} = \left ( \frac{A_\mathrm{signal}}{A_\mathrm{noise} } \right )^2,

where A is root mean square (RMS) amplitude (for example, RMS voltage). Because many signals have a very wide dynamic range, SNRs are often expressed using the logarithmic decibel scale. In decibels, the SNR is defined as

 \mathrm{SNR_{dB}} = 10 \log_{10} \left ( \frac{P_\mathrm{signal}}{P_\mathrm{noise}} \right ) = {P_\mathrm{signal,dB} - P_\mathrm{noise,dB}},

which may equivalently be written using amplitude ratios as

 \mathrm{SNR_{dB}} = 10 \log_{10} \left ( \frac{A_\mathrm{signal}}{A_\mathrm{noise}} \right )^2 = 20 \log_{10} \left ( \frac{A_\mathrm{signal}}{A_\mathrm{noise}} \right ).
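As a quick illustration of these definitions, the sketch below (plain Python, with made-up power values) computes the same SNR three ways: from powers, from RMS amplitudes, and in decibels.

```python
import math

# Hypothetical measured values: 1 mW of signal power over 10 uW of noise power.
p_signal = 1e-3   # W
p_noise = 1e-5    # W

# Linear SNR from the power ratio.
snr = p_signal / p_noise                           # 100.0

# The same ratio from RMS amplitudes (same impedance assumed),
# since power is proportional to amplitude squared.
a_signal = math.sqrt(p_signal)
a_noise = math.sqrt(p_noise)
snr_from_amplitudes = (a_signal / a_noise) ** 2    # 100.0

# In decibels: 10*log10 of the power ratio, or 20*log10 of the amplitude ratio.
snr_db = 10 * math.log10(p_signal / p_noise)       # 20.0 dB
snr_db_amp = 20 * math.log10(a_signal / a_noise)   # 20.0 dB
```

All three routes agree, as the equations above require.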

The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest undistorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS).

SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'.[note 1]

An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement:[2][3]

 \mathrm{SNR} = \frac{\mu}{\sigma}

where μ is the signal mean or expected value and σ is the standard deviation of the noise, or an estimate thereof.[note 2] Notice that such an alternative definition is only useful for variables that are always positive (such as photon counts and luminance). Thus it is commonly used in image processing,[4][5][6][7] where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above.
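A minimal sketch of this alternative definition, computed over a made-up 3x3 neighborhood of pixel values:

```python
import statistics

# Hypothetical 3x3 neighborhood of non-negative pixel values.
patch = [52, 55, 49, 53, 50, 54, 51, 48, 56]

mu = statistics.mean(patch)        # mean pixel value (the "signal")
sigma = statistics.pstdev(patch)   # standard deviation over the patch (the "noise")

snr = mu / sigma                   # reciprocal of the coefficient of variation
```

With these values the SNR comes out around 20, comfortably above the Rose criterion of 5 mentioned below.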

The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features at 100% certainty. An SNR less than 5 means less than 100% certainty in identifying image details.[8]

Yet another, very specific definition of SNR is employed to characterize the sensitivity of imaging systems; see signal to noise ratio (imaging).

Related measures are the "contrast ratio" and the "contrast-to-noise ratio".

Figure: recording of the noise of a thermogravimetric analysis device that is poorly isolated mechanically; the middle of the curve shows lower noise, owing to reduced surrounding human activity at night.

All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon, such as wind, vibrations, the gravitational attraction of the moon, or variations of temperature and humidity, depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and differ from those of the signal, it is possible to filter the noise or to process the signal. When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurement.
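The averaging trick can be sketched as follows (plain Python, with a synthetic constant signal and Gaussian noise; averaging K repeated measurements reduces the noise amplitude by roughly the square root of K, i.e. improves the SNR by about 10*log10(K) dB):

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 5.0   # constant signal being measured (hypothetical)
NOISE_STD = 1.0    # standard deviation of the random noise
K = 100            # repeated measurements per averaged sample

def measure():
    """One noisy measurement of the constant signal."""
    return TRUE_VALUE + random.gauss(0.0, NOISE_STD)

def averaged_measure(k):
    """Average k repeated measurements; random noise partially cancels."""
    return sum(measure() for _ in range(k)) / k

# Estimate the residual noise level over many trials.
single = [measure() for _ in range(2000)]
averaged = [averaged_measure(K) for _ in range(2000)]

noise_single = statistics.pstdev(single)     # close to 1.0
noise_averaged = statistics.pstdev(averaged) # close to 1.0 / sqrt(100) = 0.1
```

With K = 100 the noise standard deviation drops by roughly a factor of 10, a 20 dB SNR improvement.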

When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise").

This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither.

Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/N0, the ratio of energy per bit to noise power spectral density.

The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal.

For n-bit integers with equal distance between quantization levels (uniform quantization), the number of bits also determines the dynamic range (DR).

Assuming a uniform distribution of input signal values, the quantization noise is a uniformly-distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2n/1. The formula is then:

 \mathrm{DR_{dB}} = \mathrm{SNR_{dB}} = 20 \log_{10}(2^n) \approx 6.02 \cdot n

This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB.
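This rule of thumb can be checked directly; a small sketch:

```python
import math

def dynamic_range_db(n_bits):
    """Dynamic range of uniform n-bit quantization, in dB: 20*log10(2^n)."""
    return 20 * math.log10(2 ** n_bits)

# 16-bit audio: 20*log10(2^16) is about 96.3 dB.
dr16 = dynamic_range_db(16)
```

Each extra bit multiplies the amplitude ratio by 2, adding 20*log10(2), about 6.02 dB.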

Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level[9] and uniform distribution. In this case, the SNR is approximately

 \mathrm{SNR_{dB}} \approx 20 \log_{10} (2^n \sqrt {3/2}) \approx 6.02 \cdot n + 1.761
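This figure can be verified numerically: the sketch below quantizes a full-scale sine to n bits with a simple uniform quantizer and measures the resulting SNR, which should come out close to 6.02*n + 1.76 dB.

```python
import math

def quantized_sine_snr_db(n_bits, num_samples=100_000):
    """Quantize a full-scale sine to n bits and measure the SNR in dB."""
    levels = 2 ** n_bits
    step = 2.0 / levels            # quantization step for a [-1, 1] signal
    signal_power = 0.0
    noise_power = 0.0
    for i in range(num_samples):
        s = math.sin(2 * math.pi * i / num_samples)
        q = round(s / step) * step  # uniform quantizer (round to nearest level)
        signal_power += s * s
        noise_power += (q - s) ** 2
    return 10 * math.log10(signal_power / noise_power)

measured = quantized_sine_snr_db(8)  # close to 6.02 * 8 + 1.76, about 49.9 dB
predicted = 6.02 * 8 + 1.761
```

The measured value agrees with the formula to within a fraction of a dB for moderate bit depths.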

Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n-m bits in the mantissa and m bits in the exponent:

 \mathrm{DR_{dB}} = 6.02 \cdot 2^m
 \mathrm{SNR_{dB}} = 6.02 \cdot (n-m)

Note that the dynamic range is much larger than that of fixed-point, but at the cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal-quality disadvantage in systems where the dynamic range is less than 6.02·m dB. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms.[10]
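As an example, the sketch below applies these formulas to a hypothetical 32-bit float split as 24 mantissa bits and 8 exponent bits, and compares it with 32-bit fixed-point:

```python
def float_dr_db(m_exp_bits):
    """Dynamic range of a float with m exponent bits: 6.02 * 2^m dB."""
    return 6.02 * 2 ** m_exp_bits

def float_snr_db(n_total_bits, m_exp_bits):
    """SNR of an n-bit float with m exponent bits: 6.02 * (n - m) dB."""
    return 6.02 * (n_total_bits - m_exp_bits)

def fixed_snr_db(n_bits):
    """SNR (and dynamic range) of n-bit fixed-point: 6.02 * n dB."""
    return 6.02 * n_bits

# Hypothetical 32-bit float: 24-bit mantissa, 8-bit exponent.
dr = float_dr_db(8)           # 6.02 * 256, roughly 1541 dB of dynamic range
snr = float_snr_db(32, 8)     # 6.02 * 24, roughly 144 dB
snr_fixed = fixed_snr_db(32)  # 6.02 * 32, roughly 193 dB: better SNR, far less DR
```

The numbers make the trade-off concrete: the float gains an enormous dynamic range by giving up about 48 dB of SNR relative to fixed-point of the same width.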


Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. As a result, the noise occupies a bandwidth much wider than the signal itself, and its influence on the received signal depends mainly on how the noise is filtered. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth, most commonly a reference bandwidth of 0.1 nm. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance, an OSNR of 20 dB/0.1 nm could be quoted even though a 40 Gbit/s DPSK signal would not fit into this bandwidth. OSNR is measured with an optical spectrum analyzer.

  1. ^ The connection between optical power and voltage in an imaging system is linear. This usually means that the SNR of the electrical signal is calculated by the 10 log rule. With an interferometric system, however, where interest lies in the signal from one arm only, the field of the electromagnetic wave is proportional to the voltage (assuming that the intensity in the second, reference arm is constant). Therefore the optical power of the measurement arm is directly proportional to the electrical power, and electrical signals from optical interferometry follow the 20 log rule.[1]
  2. ^ The exact methods may vary between fields. For example, if the signal data are known to be constant, then σ can be calculated using the standard deviation of the signal. If the signal data are not constant, then σ can be calculated from data where the signal is zero or relatively constant.
  3. ^ Often special filters are used to weight the noise: DIN-A, DIN-B, DIN-C, DIN-D, CCIR-601; for video, special filters such as comb filters may be used.
  4. ^ The maximum possible full-scale signal can be specified as peak-to-peak or as RMS. Audio uses RMS; video uses peak-to-peak, which gives +9 dB more SNR for video.
  1. ^ Michael A. Choma, Marinko V. Sarunic, Changhuei Yang, Joseph A. Izatt. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Optics Express, 11(18). Sept 2003.
  2. ^ D. J. Schroeder (1999). Astronomical optics (2nd ed.). Academic Press. p. 433. ISBN 9780126298109. 
  3. ^ Bushberg, J. T., et al., The Essential Physics of Medical Imaging, (2e). Philadelphia: Lippincott Williams Wilkins, 2006, p.280.
  4. ^ Rafael C. González, Richard Eugene Woods (2008). Digital image processing. Prentice Hall. p. 354. ISBN 013168728X. 
  5. ^ Tania Stathaki (2008). Image fusion: algorithms and applications. Academic Press. p. 471. ISBN 0123725291. 
  6. ^ Jitendra R. Raol (2009). Multi-Sensor Data Fusion: Theory and Practice. CRC Press. ISBN 1439800030. 
  7. ^ John C. Russ (2007). The image processing handbook. CRC Press. ISBN 0849372542. 
  8. ^ Bushberg, J. T., et al., The Essential Physics of Medical Imaging, (2e). Philadelphia: Lippincott Williams Wilkins, 2006, p.280. ISBN 0-683-30118-7
  9. ^ Defining and Testing Dynamic Parameters in High-Speed ADCs, Maxim Integrated Products Application note 728
  10. ^ Fixed-Point vs. Floating-Point DSP for Superior Audio, Rane Corporation technical library