
Book Summary & Highlights: The Fourth Revolution: How the Infosphere is Reshaping Human Reality By Luciano Floridi


Pub Date: 2014

Amazon Summary

Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies (ICTs) are changing the answer to these fundamental human questions.

As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". Personas we adopt in social media, for example, feed into our 'real' lives so that we begin to live, as Floridi puts it, "onlife". Following those led by Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution.

"Onlife" defines more and more of our daily activity - the way we shop, work, learn, care for our health, entertain ourselves, conduct our relationships; the way we interact with the worlds of law, finance, and politics; even the way we conduct war. In every department of life, ICTs have become environmental forces which are creating and transforming our realities. How can we ensure that we shall reap their benefits? What are the implicit risks? Are our technologies going to enable and empower us, or constrain us? Floridi argues that we must expand our ecological and ethical approach to cover both natural and man-made realities, putting the 'e' in an environmentalism that can deal successfully with the new challenges posed by our digital technologies and information society.


Key Ideas

The Fourth Revolution

Floridi's scheme has three periods in humanity's development: pre-history, before recorded information; history, when society was assisted by recorded information; and 'hyperhistory', with society dependent on, and defined by, information and communication technologies. The move to hyperhistory is paralleled by the development of the condition of 'onlife', whereby life is lived simultaneously online and offline in an 'infosphere'. This is a dramatic change, and one which occurs only once in the lifetime of a species. For the generation which has lived through it, it is hardly surprising to find new problems and issues arising, and information overload can be understood as one of these.

Contents

Time: Hyperhistory

All members of the G7 group—namely Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States of America—qualify as hyperhistorical societies because, in each country, at least 70 per cent of the Gross Domestic Product (GDP, the value of goods and services produced in a country) depends on intangible goods, which are information-related, rather than on material goods, which are the physical output of agricultural or manufacturing processes. Their economies heavily rely on information-based assets (knowledge-based economy), information-intensive services (especially business and property services, communications, finance, insurance, and entertainment), and information-oriented public sectors (especially education, public administration, and health care).
The life cycle of information (see Figure 4) typically includes the following phases: occurrence (discovering, designing, authoring, etc.), recording, transmission (networking, distributing, accessing, retrieving, etc.), processing (collecting, validating, merging, modifying, organizing, indexing, classifying, filtering, updating, sorting, storing, etc.), and usage (monitoring, modelling, analysing, explaining, planning, forecasting, decision-making, instructing, educating, learning, playing, etc.).
[Figure 4: the life cycle of information]

Size Of The Industry

We know that what our eyes can see in the world—the visible spectrum of the rainbow—is but a very small portion of the electromagnetic spectrum, which includes gamma rays, X-rays, ultraviolet, infrared, microwaves, and radio waves. Likewise, the data processing ‘spectrum’ that we can perceive is almost negligible compared to what is really going on in machine-to-machine and human–computer interactions. An immense number of ICT applications run an incalculable number of instructions every millisecond of our lives to keep the hyperhistorical information society humming. ICTs consume most of their MIPS to talk to each other, collaborate, and coordinate efforts, and put us as comfortably as possible in or on the loop, or even out of it, when necessary.

Machine-To-Machine Communication Is Growing Faster Than Human-To-Machine Communication

The number of connected devices per person will grow from 0.08 in 2003, to 1.84 in 2010, to 3.47 in 2015, to 6.58 in 2020. To our future historian, global communication on Earth will appear to be largely a non-human phenomenon, as Figure 9 illustrates.

[Figure 9: global communication as a largely machine-to-machine phenomenon]
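As a back-of-the-envelope check (not part of Floridi's text), the devices-per-person figures quoted above imply sharply different growth rates before and after 2010. A short Python sketch makes the implied compound annual rates explicit:

```python
# Devices-per-person figures quoted above (forecasts cited by Floridi).
devices_per_person = {2003: 0.08, 2010: 1.84, 2015: 3.47, 2020: 6.58}

years = sorted(devices_per_person)
for start, end in zip(years, years[1:]):
    ratio = devices_per_person[end] / devices_per_person[start]
    annual = ratio ** (1 / (end - start)) - 1  # compound annual growth rate
    print(f"{start}-{end}: {annual:.1%} per year")
```

The 2003–2010 jump works out to about 56 per cent a year, flattening to roughly 14 per cent a year in the two later periods: growth that remains fast in absolute terms while decelerating in relative terms.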

Conclusion

The living generation is experiencing a transition from history to hyperhistory. Advanced information societies are more and more heavily dependent on ICTs for their normal functioning and growth. Processing power will increase, while becoming cheaper. The amount of data will reach unthinkable quantities. And the value of our network will grow almost vertically. However, our storage capacity (space) and the speed of our communications (time) are lagging behind. Hyperhistory is a new era in human development, but it does not transcend the spatio-temporal constraints that have always regulated our life on this planet. The question to be addressed next is: given all the variables we have seen in this chapter, what sort of hyperhistorical environment are we building for ourselves and for future generations? The short answer is: the infosphere.

Space: Infosphere

Despite some important exceptions—especially vases and metal tools in ancient civilizations, engravings, and then books after Gutenberg—it was the Industrial Revolution that really marked the passage from a nominalist world of unique objects to a Platonic world of types of objects. Our industrial goods are all perfectly reproducible as identical to each other, therefore indiscernible, and hence pragmatically dispensable because they may be replaced without any loss in the scope of interactions that they allow. This is so much part of our culture that we expect ideal standards and strict uniformity of types to apply even when Nature is the source. In the food industry in the UK, for example, up to 40 per cent of all the edible produce never reaches the market but is wasted because of aesthetic standards, e.g., size, shape, and absence of blemish criteria in fruit and vegetables. This is because retailers know that we, the shoppers, will not buy unsightly produce. 42 Similarly, in the fashion industry, when the human body is in question, the dialectics of being uniquely like everybody else joins forces with the malleability of the digital to give rise to the common phenomenon of ‘airbrushing’. Digital photographs are regularly and routinely retouched in order to adapt the appearance of portrayed people to unrealistic and misleading stereotypes, with unhealthy impact on customers’ expectations, especially teenagers. The discussion of legal proposals to restrain such practices has been going on for years in France and in the UK, while evidence that warning labels and disclaimers would make a difference in the public perception is still debated. 43 When our ancestors bought a horse, they bought this horse or that horse, not ‘the’ horse. Today, we find it utterly obvious and non-problematic that two cars may be virtually identical and that we are invited to test-drive and buy the model rather than an individual ‘incarnation’ of it. We buy the type not the token.
When something is intrinsically wrong with your car, it may be a problem with the model, affecting millions of customers. In 1981, the worst car recall recorded by the automobile industry so far involved 21 million Ford, Mercury, and Lincoln vehicles. 44 Quite coherently, we are quickly moving towards a commodification of objects that considers repair as synonymous with replacement, even when it comes to entire buildings. Such a shift in favour of types of objects has led, by way of compensation, to a prioritization of informational branding—a process comparable to the creation of cultural accessories and personal philosophies 45 —and of reappropriation. The person who puts a sticker in the window of her car, which is otherwise perfectly identical to thousands of others, is fighting an anti-Platonic battle in support of a nominalist philosophy. The same holds true for the student plastering his laptop with stickers to personalize it. The information revolution has further exacerbated this process. Once our window-shopping becomes Windows-shopping and no longer means walking down the street but browsing the Web, the processes of dephysicalization and typification of individuals as unique and irreplaceable entities may start eroding our sense of personal identity as well. We may risk behaving like, and conceptualizing ourselves as, mass-produced, anonymous entities among other anonymous entities, exposed to billions of other similar individuals online. We may conceive each other as bundles of types, from gender to religion, from family role to working position, from education to social class. And since in the infosphere we, as users, are increasingly invited, if not forced, to rely on indicators rather than actual references—we cannot try all the restaurants in town, the references, so we trust online recommendations, the indicators of quality—we share and promote a culture of proxies.
LinkedIn profiles stand for individuals, the number of linked pages stand for relevance and importance, ‘likes’ are a proxy for pleasant, TripAdvisor becomes a guide to leisure. Naturally, the process further fuels the advertisement industry and its new dialectics of virtual materialism. Equally naturally, the process ends up applying to us as well. In a proxy culture, we may easily be de-individualized and treated as a type (a type of customer, a type of driver, a type of citizen, a type of patient, a type of person who lives at that postal code, who drives that type of car, who goes to that type of restaurant, etc.). Such proxies may be further used to reidentify us as specific consumers for customizing purposes. I do not know whether there is anything necessarily unethical with all this, but it seems crucial that we understand how ICTs are significantly affecting us, our identities, and our self-understanding.

Identity: Onlife

Conclusion

In this chapter and in Chapters 1 and 2, I have sketched how ICTs have brought about some significant transformations in our history (hyperhistory), in our environment (infosphere), and in the development of our selves (the onlife experience). At the roots of such transformations, there seems to be a deep philosophical change in our views about our ‘special’ place and role in the universe. It is a fourth revolution in our self-understanding.

Self-Understanding: The Four Revolutions

Conclusion

In light of the fourth revolution, we understand ourselves as informational organisms among others. We saw in Chapter 2 that, in the long run, de-individualized (you become ‘a kind of’) and reidentified (you are seen as a specific crossing point of many ‘kinds of’) inforgs may be treated like commodities that can be sold and bought on the advertisement market. We may become like Gogol’s dead souls, but with wallets. 11 Our value depends on our purchasing power as members of a customer set, and the latter is only a click away. This is all very egalitarian: nobody cares who you are on the Web, as long as your ID is that of the right kind of shopper. There is no stock exchange for these dead souls online, but plenty of Chichikovs (the main character in Gogol’s novel) who wish to buy them. So what is the dollar of an inforg worth? As usual, if you buy them in large quantities you get a discount. So let’s have a look at the wholesale market. In 2007, Fox Interactive Media signed a deal with Google to install the famous search engine (and ancillary advertising system) across its network of Internet sites, including the highly popular (at the time) MySpace. Cost of the operation: $900 million. 12 Estimated number of user profiles in MySpace: nearly 100 million at the time. So, average value of a digital soul: $9 at most, but only if it fitted the high-quality profile of a MySpace.com user. As Sobakievich, one of the characters in Gogol’s novel, would say: It’s cheap at the price. A rogue would cheat you, sell you some worthless rubbish instead of souls, but mine are as juicy as ripe nuts, all picked, they are all either craftsmen or sturdy peasants. 13 The ‘ripe nuts’ are what really count, and, in MySpace, they were simply self-picked: tens of millions of educated people, with enough time on their hands (they would not be there otherwise), sufficiently well-off, English-speaking, with credit cards and addresses in deliverable places…it makes any advertiser salivate. 
Fast-forward five years. The market is bigger, the nuts are less ripe, and so the prices are even lower. In 2012, Facebook filed for a $5 billion initial public offering. 14 Divide that by its approximately 1 billion users at that time, and you have a price of $5 per digital soul. An almost 50 per cent discount, yet still rather expensive. Consider that, according to the Financial Times, 15 in 2013 most people’s profile information (an aggregate of age, gender, employment history, personal ailments, credit scores, income details, shopping history, locations, entertainment choices, address, and so forth) sold for less than $1 in total per person. For example, income details and shopping histories sold for $0.001 each. The price of a single record drops even further for bulk buyers. When I ran the online calculator offered by the Financial Times, the simulation indicated that ‘marketers would pay approximately for your data: $0.3723’. As a digital soul, in 2013, I was worth about a third of the price of a song on iTunes. You can imagine my surprise when, in 2013, Yahoo bought Tumblr (a blogging platform) for $1.1 billion: with 100 million users, that was $11 per digital soul. I suspect it might have been overpriced. 16 From Gogol to Google, a personalizing—recall the person who put a sticker in the window of her car, at the end of Chapter 2—reaction to such massive customization is natural, but also tricky. We saw that we could construct, self-brand, and reappropriate ourselves in the infosphere by using blogs and Facebook entries, Google homepages, YouTube videos, and Flickr albums; by sharing choices of food, shoes, pets, places we visit or like, types of holidays we take and cars we drive, instagrams, and so forth; by rating and ranking anything and everything we click into. It is perfectly reasonable that Second Life should be a paradise for fashion enthusiasts of all kinds.
Not only does it provide a flexible platform for designers and creative artists, it is also the right context in which digital souls (avatars) intensely feel the pressure to obtain visible signs of self-identity. After all, your free avatar looks like anybody else. Years after the launch of Second Life, there is still no inconsistency between a society so concerned about privacy rights and the success of social services such as Facebook. We use and expose information about ourselves to become less informationally anonymous and indiscernible. We wish to maintain a high level of informational privacy almost as if that were the only way of saving a precious capital that can then be publicly invested (squandered, pessimists would say) by us in order to construct ourselves as individuals easily discernible and uniquely reidentifiable. Never before has informational privacy played such a crucial role in the lives of millions of people. It is one of the defining issues of our age. The time has come to have a closer look at what we actually mean by privacy after the fourth revolution.
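Floridi's 'price of a digital soul' is simple division: deal value over user base. A minimal sketch, using only the three figures quoted in the passage above:

```python
# Deal value divided by user count, for the three deals discussed above.
deals = [
    ("MySpace (Fox/Google deal, 2007)", 900_000_000, 100_000_000),
    ("Facebook (IPO filing, 2012)", 5_000_000_000, 1_000_000_000),
    ("Tumblr (Yahoo purchase, 2013)", 1_100_000_000, 100_000_000),
]
for name, value_usd, users in deals:
    print(f"{name}: ${value_usd / users:.2f} per user")  # $9.00, $5.00, $11.00
```

The arithmetic reproduces the $9, $5, and $11 valuations in the text; the interesting point is how crude the proxy is, since it prices every profile in a user base identically.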

Privacy

Conclusion

When Odysseus returns to Ithaca, he is identified four times. Argos, his old dog, is not fooled and recognizes him despite his disguise as a beggar, because of his smell. Then Eurycleia, his wet-nurse, while bathing him, recognizes him by a scar on his leg, inflicted by a boar when hunting. He then proves to be the only man capable of stringing Odysseus’ bow. All these are biometric tests no Arnaud du Tilh would have passed. But then, Penelope is no Bertrande either. She does not rely on any ‘unique identifier’ but finally tests Odysseus by asking Eurycleia to move the bed in their wedding-chamber. Odysseus hears this and protests that it is an impossible task: he himself had built the bed around a living olive tree, which is now one of its legs. This is a crucial piece of information that only Penelope and Odysseus ever shared. By naturally relying on it, Odysseus restores Penelope’s full trust. She recognizes him as the real Odysseus not because of who he is or how he looks, but in a constitutive sense, because of the information that only they have in common and that constitutes both of them as a unique couple. Through the sharing of this intimate piece of information, which is part of who they are as a couple, identity is restored and the supra-agent is reunited. There is a line of continuity between the roots of the olive tree and the married couple. For Homer, their bond was like-mindedness (Ὁμοφροσύνη); to Shakespeare, it was the marriage of true minds. To us, it is informational privacy that admits no informational friction.

Intelligence

Conclusion

Until recently, the widespread impression was that the process of adding to the mathematical book of nature (inscription) required the feasibility of productive, cognitive AI, in other words, the strong programme. After all, developing even a rudimentary form of non-biological intelligence may seem to be not only the best but perhaps the only way to implement ICTs sufficiently adaptive and flexible to deal effectively with a complex, ever-changing, and often unpredictable—when not unfriendly—environment. What Descartes acknowledged to be an essential sign of intelligence—the capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage—would be a priceless feature of any appliance that sought to be more than merely smart. Such an impression is not incorrect, but it is distracting because, while we were unsuccessfully pursuing the inscription of strong, productive AI into the world, we were actually changing the world to fit light, reproductive AI. ICTs are not becoming more intelligent while making us more stupid. Instead, the world is becoming an infosphere increasingly well adapted to ICTs’ limited capacities. Recall how we set up a border wire so that the robot could safely and successfully mow the lawn. In a comparable way, we are adapting the environment to our smart technologies to make sure the latter can interact with it successfully. We are, in other words, wiring or rather enveloping the world, as I shall argue in Chapter 7.

Agency

Conclusion

Light AI, smart agents, artificial companions, Semantic Web, or Web 2.0 applications are part of what I have described as a fourth revolution in the long process of reassessing humanity’s fundamental nature and role in the universe. The deepest philosophical issue brought about by ICTs concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other. When artificial agents, including artificial companions and software-based smart systems, become commodities as ordinary as cars, we shall accept this new conceptual revolution with much less reluctance. It is humbling, but also exciting. For in view of this important evolution in our self-understanding, and given the sort of ICT-mediated interactions that humans will increasingly enjoy with other agents, whether natural or synthetic, we have the unique opportunity of developing a new ecological approach to the whole of reality. As I shall argue in Chapter 10, how we build, shape, and regulate ecologically our new infosphere and ourselves is the crucial challenge brought about by ICTs and the fourth revolution. Recall Beatrice’s question at the beginning of Much Ado About Nothing: ‘Who is his companion now?’ She would not have understood ‘an artificial agent’ as an answer to her question. I suspect future generations will find it unproblematic. It is going to be our task to ensure that the transition from her question to their answer will be as acceptable as possible. Such a task is both ethical and political and, as you may expect by now, this is the topic of Chapter 8.

Politics

Conclusion

Six thousand years ago, humanity witnessed the invention of writing and the emergence of the conditions of possibility that lead to cities, kingdoms, empires, sovereign states, nations, and intergovernmental organizations. This is not accidental. Prehistoric societies are both ICT-less and stateless. The state is a typical historical phenomenon. It emerges when human groups stop living a hand-to-mouth existence in small communities and begin to live a mouth-to-hand one. Large communities become political societies, with division of labour and specialized roles, organized under some form of government, which manages resources through the control of ICTs, including that special kind of information called ‘money’. From taxes to legislation, from the administration of justice to military force, from census to social infrastructure, the state was for a long time the ultimate information agent and so history, and especially modernity, is the age of the state. Almost halfway between the beginning of history and now, Plato was still trying to make sense of both radical changes: the encoding of memories through written symbols and the symbiotic interactions between the individual and the polis-state. In fifty years, our grandchildren may look at us as the last of the historical, state-organized generations, not so differently from the way we look at the Amazonian tribes mentioned in Chapter 1, as the last of the prehistorical, stateless societies. It may take a long while before we come to understand in full such transformations. And this is a problem, because we do not have another six millennia in front of us. We are playing a technological gambit with ICTs, and we have only a short time to win the environmental game, for the future of our planet is at stake, as I shall argue in Chapter 9.

Environment

Conclusion

Highlights

Topical

Stats

A few years ago, researchers at Berkeley’s School of Information 4 estimated that humanity had accumulated approximately 12 exabytes 5 of data in the course of its entire history until the commodification of computers, but that it had already reached 180 exabytes by 2006. According to a more recent study, 6 the total grew to over 1,600 exabytes between 2006 and 2011, thus passing the zettabyte (1,000 exabytes) barrier. This figure is now expected to grow fourfold approximately every three years, so that we shall have 8 zettabytes of data by 2015. Every day, enough new data are being generated to fill all US libraries eight times over. Of course, armies of ICT devices are constantly working to keep us afloat and navigate through such an ocean of data. These are all numbers that will keep growing quickly and steadily for the foreseeable future, especially because those very devices are among the greatest sources of further data, which in turn require, or simply make possible, more ICTs.

Floridi, Luciano. The Fourth Revolution (p. 13). OUP Oxford. Kindle Edition.
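The quoted growth rate (fourfold roughly every three years, compounded from the ~1,600-exabyte 2011 baseline) can be projected forward as a rough check. Note that the result lands somewhat above the 8-zettabyte forecast for 2015; such published forecasts used different baselines, so this is only an order-of-magnitude sketch:

```python
# Back-of-the-envelope projection of global data volume, compounding the
# growth rate quoted above: fourfold roughly every three years.
ZETTABYTE = 1_000  # in exabytes

BASE_YEAR, BASE_EB = 2011, 1_600  # ~1,600 exabytes by 2011

def projected_exabytes(year):
    """Compound the fourfold-per-three-years rate from the 2011 baseline."""
    return BASE_EB * 4 ** ((year - BASE_YEAR) / 3)

for year in (2012, 2013, 2014, 2015):
    print(f"{year}: ~{projected_exabytes(year) / ZETTABYTE:.1f} zettabytes")
```

Compounding gives about 6.4 zettabytes by 2014 and roughly 10 by 2015, the same ballpark as the forecast Floridi cites.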

Reframing Info Overwhelm

Consider the problem first. ‘Big data’ came to be formulated after other buzz expressions, such as ‘infoglut’ or ‘information overload’, began to fade away, yet the idea remains the same. It refers to an overwhelming sense that we have bitten off more than we can chew, that we are being force-fed like geese, that our intellectual livers are exploding. This is a mistake. Yes, we have seen that there is an obvious exponential growth of data on an ever-larger number of topics, but complaining about such over-abundance would be like complaining about a banquet that offers more than we can ever eat. Data remain an asset, a resource to exploit. Nobody is forcing us to digest every available byte. We are becoming data-richer by the day; this cannot be the fundamental problem. Since the problem is not the increasing wealth of data that is becoming available, clearly the solution needs to be reconsidered: it cannot be merely how many data we can technologically process. We saw that, if anything, more and better techniques and technologies are only going to generate more data. If the problem were too many data, more ICTs would only exacerbate it. Growing bigger digestive systems, as it were, is not the way forward. The real epistemological problem with big data is small patterns. Precisely because so many data can now be generated and processed so quickly, so cheaply, and on virtually anything, the pressure both on the data nouveau riche, such as Facebook or Walmart, Amazon or Google, and on the data old money, such as genetics or medicine, experimental physics or neuroscience, is to be able to spot where the new patterns with real added-value lie in their immense databases, and how they can best be exploited for the creation of wealth, the improvement of human lives, and the advancement of knowledge. This is a problem of brainpower rather than computational power. 
Small patterns matter because, in hyperhistory, they represent the new frontier of innovation and competition, from science to business, from governance to social policies, from security to safety. In a free and open marketplace of ideas, if someone else can exploit the small patterns earlier and more successfully than you do, you might quickly be out of business, miss a fundamental discovery and the corresponding Nobel, or put your country in serious danger. Small patterns may also be risky, because they push the limit of what events or behaviours are predictable, and therefore may be anticipated. This is an ethical problem. Target, an American retailing company, relies on the analysis of the purchasing patterns of 25 products in order to assign each shopper a ‘pregnancy prediction’ score, estimate her due date, and send coupons timed to specific stages of her pregnancy. In a notorious case, 8 it caused some serious problems when it sent coupons to a family in which the teenage daughter had not informed her parents about her new status. I shall return to this sort of problem in Chapters 3 and 4, when discussing personal identity and privacy. Unfortunately, small patterns may be significant only if properly aggregated, correlated, and integrated—for example in terms of loyalty cards and shopping suggestions—compared, as when banks utilize big data to fight fraudsters, and processed in a timely manner, as in financial markets. And because information is indicative also when it is not there (the lack of some data may also be informative in itself), small patterns can also be significant if they are absent. Sherlock Holmes solves one of his famous cases because of the silence of the dog, which should have barked. If big data are not ‘barking’ when they should, something is going on, as the financial watchdogs (should) know, for example.

Floridi, Luciano. The Fourth Revolution (pp. 16-17). OUP Oxford. Kindle Edition.

Most Popular Highlights From Kindle Users

Thanks to ICTs, we have entered the age of the zettabyte. Our generation is the first to experience a zettaflood, to introduce a neologism to describe this tsunami of bytes that is submerging our environments. In other contexts, this is also known as ‘big data’ (Figure 10). Despite the importance of the phenomenon, it is unclear what exactly the term ‘big data’ means. The temptation, in similar cases, is to adopt the approach pioneered by Potter Stewart, United States Supreme Court Justice, when asked to describe pornography: difficult to define, but ‘I know it when I see it’. Other strategies have been much less successful. For example, in the United States, the National Science Foundation (NSF) and the National Institutes of Health (NIH) have identified big data as a programme focus. One of the main NSF–NIH interagency initiatives addresses the need for core techniques and technologies for advancing big data science and engineering.

Second-Order Technologies

Second-order technologies are those relating users no longer to nature but to other technologies; that is, they are technologies whose prompters are other technologies (see Figure 14).
[Figure 14: the scheme of second-order technology]
This is a good reason not to consider the concept of a tool or that of a consumer good as being coextensive with that of first-order technology. Think of the homely example of a humble screwdriver. Of course, it is a tool, but it is between you and, you guessed it, a screw, which is actually another piece of technology, which in its turn (pun irresistible) is between the screwdriver and, for example, two pieces of wood. The same screwdriver is also more easily understood as an instance of a capital good, that is, a good that helps to produce other goods. Other examples of such second-order technologies include keys, whose prompters are obviously locks, and vehicles whose users are (still) human and whose prompters are paved roads, another piece of technology. Some first-order technologies (recall: these are the ones that satisfy the scheme humanity–technology–nature) are useless without the corresponding second-order technologies to which they are coupled. Roads do not require cars to be useful, but screws call for screwdrivers. And second-order technologies imply a level of mutual dependency with first-order technologies (a drill is useless without the drill bits) that is the hallmark of some degree of specialization, and hence of organization. You either have nuts and bolts or neither.

Third-Order Technologies

For technology starts developing exponentially once its in-betweenness relates technologies-as-users to other technologies-as-prompters, in a technology–technology–technology scheme (see Figure 15). Then we, who were the users, are no longer in the loop, but at most on the loop: pilots still fly drones actively, with a stick and a throttle, but operators merely control them with a mouse and a keyboard. 3 Or perhaps we are not significantly present at all, that is, we are out of the loop entirely, and enjoy or simply rely on such technologies as (possibly unaware) beneficiaries or consumers. It is not an entirely unprecedented phenomenon. Aristotle argued that slaves were ‘living tools’ for action:
[Figure 15: the technology–technology–technology scheme of third-order technology]
"An article of property is a tool for the purpose of life, and property generally is a collection of tools, and a slave is a live article of property. 4 […] These considerations therefore make clear the nature of the slave and his essential quality; one who is a human being belonging by nature not to himself but to another is by nature a slave, and a human being belongs to another if, although a human being, he is a piece of property, and a piece of property is an instrument for action separate from its owner.5" Clearly, such ‘living tools’ could be ‘used’ as third-order technology and place the masters off the loop. Today, this is a view that resonates with many metaphors about robots and other ICT devices as slaves. Of course, the only safe prediction about forecasting the future is that it is easy to get it wrong. Who would have thought that, twenty years after the flop of Apple’s Newton, 6 people would have been queuing to buy an iPad? Sometimes, you just have to wait for the right apple to fall on your head. Still, ‘the Internet of things’, in which third-order technologies work independently of human users, seems a low-hanging fruit sufficiently ripe to be worth monitoring. Some pundits have been talking about it for a while now. The next revolution will not be the vertical development of some uncharted new technology, but horizontal. For it will be about connecting anything to anything (a2a), not just humans to humans. One day, you-name-it 2.0 will be passé, and we might be thrilled by a2a technologies. I shall return to this point in Chapter 7. For the moment, the fact that the Newton was advertised as being able to connect to a printer was quite amazing at the time, but rather trivial today. Imagine a world in which your car autonomously checks your electronic diary and reminds you, through your digital TV, that you need to get some petrol tomorrow, before your long-distance commuting. All this and more is already easily feasible.
The greatest obstacles are a lack of shared standards, limited protocols, and hardware that is not designed to be fully modular with the rest of the infosphere. Anyone who could invent an affordable, universal appliance that may be attached to our billions of artefacts in order to make them interact with each other would soon be a billionaire. It is a problem of integration and defragmentation, which we currently solve by routinely forcing humans to work like interfaces. We operate the fuel dispensers at petrol stations, we translate the GPS’s instructions into driving manoeuvres, and we make the grocery supermarket interact with our refrigerator.
The development of technologies, from first- to second- and finally to third-order, poses many questions. Two seem to be most relevant in the context of our current explorations. First, if technology is always in-between, what are the new related elements when ICTs work as third-order technologies? To be more precise, for the first time in our development, we have technologies that can regularly and normally act as autonomous users of other technologies, yet what is ICTs’ in-between relationship to us, no longer as users but as potential beneficiaries who are out of the loop? A full answer must wait until the following chapters. Here, let me anticipate that, precisely because ICTs finally close the loop, and let technology interact with technology through itself, one may object that the very question becomes pointless. With the appearance of third-order technologies all the in-betweenness becomes internal to the technologies, no longer our business. We shall see that such a process of technological ‘internalization’ has raised concern that ICTs may end up shaping or even controlling human life. At the same time, one may still reply that ICTs, as third-order technologies that close the loop, internalize the technological in-betweenness but generate a new ‘outside’, for they create a new space (think for example of cyberspace), which is made possible by the loop, that relies on the loop to continue to exist and to flourish, but that is not to be confused with the space inside the loop. Occurrences of such spaces are not socially unprecedented. At different times and in different societies, buildings have been designed with areas to be used only by slaves or servants for the proper, invisible functioning of the whole house-system, from the kitchen and the canteen to separate stairs and corridors. Viewers of the popular TV drama Downton Abbey will recognize the ‘upstairs, downstairs’ scenario. 
What is unprecedented is the immense scale and pace at which the whole of human society is now migrating to this out-of-the-loop space, whenever possible. Second, if technology is always in-between, then what enables such in-betweenness to be successful? To put it slightly differently: how does technology interact with the user and the prompter? The one-word answer is: interfaces, the topic of the next section.

ICTs as interpreting and creating technologies

Today, when we think of technology in general, ICTs and their ubiquitous, user-friendly interfaces come immediately to mind. This is to be expected. In hyperhistorical societies, ICTs become the characterizing first-, second-, and third-order technologies. We increasingly interact with the world and with our technologies through ICTs, and ICTs are the technologies that can, and tend to, interact with themselves, and invisibly so. Also to be expected is that, as in the past, the dominant technology of our time is having a twofold effect. On the one hand, by shaping and influencing our interactions with the world, first- and second-order ICTs invite us to interpret the world in ICT-friendly terms, that is, informationally. On the other hand, by creating entirely new environments, which we then inhabit (the out-of-the-loop experience, functionally invisible by design), third-order ICTs invite us to consider the intrinsic nature of increasing portions of our world as being inherently informational. In short, ICTs make us think about the world informationally and make the world we experience informational. The result of these two tendencies is that ICTs are leading our culture to conceptualize the whole reality and our lives within it in ICT-friendly terms, that is, informationally, as I shall explain in this section.
The most obvious way in which ICTs are transforming the world into an infosphere concerns the transition from analogue to digital and then the ever-increasing growth of the informational spaces within which we spend more and more of our time. Both phenomena are familiar and require no explanation, but a brief comment may not go amiss. This radical transformation is also due to the fundamental convergence between digital tools and digital resources. The intrinsic nature of the tools (software, algorithms, databases, communication channels and protocols, etc.) is now the same as, and therefore fully compatible with, the intrinsic nature of their resources, the raw data being manipulated. Metaphorically, it is a bit like having pumps and pipes made of ice to channel water: it is all H2O anyway. If you find this questionable, consider that, from a physical perspective, it would be impossible to distinguish between data and programs in the hard disk of your computer: they are all digits anyway.
In the (fast approaching) future, more and more objects will be third-order ITentities able to monitor, learn, advise, and communicate with each other. A good example is provided by RFID (Radio Frequency IDentification) tags, which can store and remotely retrieve data from an object and give it a unique identity, like a barcode. Tags can measure 0.4 mm² and are thinner than paper. Incorporate this tiny microchip in everything, including humans and animals, and you have created ITentities. This is not science fiction. According to an early report by the market research company In-Stat, 14 the worldwide production of RFID increased more than 25-fold between 2005 and 2010 to reach 33 billion tags. A more recent report by IDTechEx 15 indicates that in 2012 the value of the global RFID market was $7.67 billion, up from $6.51 billion in 2011. The RFID market is forecast to grow steadily over the next decade, rising fourfold in this period to $26.19 billion in 2022.
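The forecast quoted above is easy to sanity-check: $7.67 billion in 2012 growing to $26.19 billion by 2022 is a growth factor of roughly 3.4 (the text's "fourfold" is a round figure), which corresponds to an annual growth rate of about 13 per cent. A minimal sketch (the `cagr` helper is my own, not from the book):

```python
# Sanity-check the RFID market figures quoted in the text:
# $7.67bn (2012) forecast to reach $26.19bn (2022).

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two values a given number of years apart."""
    return (end_value / start_value) ** (1 / years) - 1

growth_factor = 26.19 / 7.67        # ~3.4x overall, i.e. "roughly fourfold"
annual_rate = cagr(7.67, 26.19, 10) # ~0.13, i.e. about 13% per year

print(f"growth factor: {growth_factor:.2f}x")
print(f"implied annual growth rate: {annual_rate:.1%}")
```

The point of the check is only that a steady low-double-digit annual rate, compounded over a decade, is enough to produce the headline "fourfold" growth.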
In 2011, c.21 per cent of the EU population used a laptop to access the Internet, via wireless away from home or work. Source: European Commission, Digital Agenda for Europe.
The combination of virtualization of services and virtual assets offers an unprecedented opportunity. Nowadays it is still common and easy to insure a machine, like a laptop, on which the data are stored, but not the data that it stores. This is because, although data may be invaluable and irreplaceable, it is obvious that they are also perfectly clonable at a negligible cost, contrary to physical objects, so it would be hard for an insurer to be certain about their irrecoverable loss or corruption. However, cloud computing decouples the physical possession of data (by the provider) from their ownership (by the user), and once it is the provider that physically possesses the data and is responsible for their maintenance, the user/owner of such data may justifiably expect to see them insured, for a premium of course, and to be compensated in case of damage, loss, or downtime. Users should be able to insure their data precisely because they own them but they do not physically possess them. ‘Cyber insurance’ has been around for many years, 41 it is the right thing to do, but it is only with cloud computing that it may become truly feasible. We are likely to witness a welcome shift from hardware to data in the insurance strategies used to hedge against the risk of irreversible losses or damages.

Most Popular Highlights From Goodreads Users

“Technologies as users interacting with other technologies as prompters, through other in-between technologies: this is another way of describing hyperhistory as the stage of human development”
“One of the most obvious features that characterizes any technology is its in-betweenness. Suppose Alice lives in Rio de Janeiro, not in Oxford. A hat is a technology between her and the sunshine. A pair of sandals is a technology between her and the hot sand of the beach on which she is walking.”
“ICTs are modifying the very nature of, and hence what we mean by, reality, by transforming it into an infosphere. Infosphere is a neologism coined in the seventies. It is based on ‘biosphere’, a term referring to that limited region on our planet that supports life. It is also a concept that is quickly evolving.”
“The great opportunity offered by ICTs comes with a huge intellectual responsibility to understand them and take advantage of them in the right way.”
“Recorded memories tend to freeze and reinforce the nature of their subject. The more memories we accumulate and externalize, the more narrative constraints we provide for the construction and development of our personal identities. Increasing our memories also means decreasing the degree of freedom we might enjoy in redefining ourselves. Forgetting is part of the process of self-construction. A potential solution, for generations to come, may be to be thriftier with anything that tends to crystallize the nature of the self, and more adept in handling new or refined skills of self-construction.”
“The technophile and the technophobe ask the same question. What's next?”
“When technologies are in-between human users and natural prompters, we may qualify them as first-order (Figure 13). Listing first-order technologies is simple. The ones mentioned earlier all qualify. More can easily be added, such as the plough, the wheel, or the umbrella. The axe is probably the first and oldest kind of first-order technology. Nowadays, a wood-splitting axe is still a first-order technology between you, the user, and the wood, the prompter. A saddle is between you and a horse.”
“In short, human intelligent design (pun intended) should play a major role in shaping the future of our interactions with each other, with forthcoming technological artefacts, and with the infosphere we share among us and with them. After all, it is a sign of intelligence to make stupidity work for you.”
“In 2011, the total world wealth1 was calculated to be $231 trillion, up from $195 trillion in 2010.2 Since we are almost 7 billion, that was about $33,000 per person, or $51,000 per adult, as the report indicates. The figures give a clear sense of the level of inequality. In the same year, we spent $498 billion on advertisements.3”
“The illusion that there might be a single, correct, absolute answer independently of context, purpose and perspective, that is independently of the relevant interface, leads to paradoxical nonsense.”
“The design of good interfaces takes time and ingenuity.”

Recommended Reading By The Author

Preface

If you wish to know more about the philosophers mentioned in this book, you may consider reading Magee (2000), which is very accessible. Floridi (2011) and Floridi (2013) are graduate-level treatments of the foundations of the philosophy of information and of information ethics.

Chapter 1: Time

On writing as a technology and on the interplay between orality and literacy, the now classic reference is Ong (1988). Claude Shannon (1916–2001) is the father of information theory. His seminal work, Shannon and Weaver (1949, rep. 1998), requires a good background in mathematics and probability theory. An accessible introduction to information theory is still Pierce (1980). I have covered topics discussed in this chapter in Floridi (2010a), where the reader can also find a simple introduction to information theory. Gleick (2011) is an enjoyable ‘story’ of information. Ceruzzi (2012) provides a short introduction to the history of computing, from its beginning to the Internet. Caldarelli and Catanzaro (2012) give a short introduction to networks. On big data, a good survey is O’Reilly Media (2012); the Kindle edition is free. Mayer-Schönberger and Cukier (2013) is accessible. On the postmodern society, Lyotard (1984) is essential reading, philosophically demanding but also rewarding. On the network society, Manuel Castells (2000), the first volume of his trilogy, has shaped the debate. The information society produces much information about itself.
Among the many valuable, yearly reports, freely available online, one may consult Measuring the Information Society, which includes the ICT Development Index and the ICT Price Basket, two benchmarks useful for monitoring the development of the information society worldwide; the Global Information Technology Report, produced by the World Economic Forum in cooperation with INSEAD, which covers 134 economies worldwide and is acknowledged to be the most comprehensive and authoritative international assessment of the impact of ICT on countries’ development and competitiveness; the International Telecommunication Union Statistics, which collects data about connectivity and availability of telecommunication services worldwide; and the Digital Planet report, published by the World Information Technology and Services Alliance, which contains projections on ICT spending. Finally, Brynjolfsson and McAfee (2011) analyse how ICTs affect the job market, transform skills, and reshape the evolution of human labour. They do so from an American perspective, but their insights are universal, the scholarship admirable, and the recommendations on how machines and humans may collaborate quite convincing.

Chapter 2: Space

A useful, short, but comprehensive overview of the history of technology is offered by Headrick (2009). For a textbook-like approach, oriented towards the interactions between technology and science, a good starting point is McClellan and Dorn (2006). Shumaker et al. (2011) provide an important reference in the animal tool-making behaviour literature; the first edition published in 1980 was influential. A first and easy introduction to the history of inventions is offered by Challoner (2009), part of the 1001 series. The history of interfaces is ripe for a good introductory book, as the ones that I know are all technical. On the visualization of information, a classic is Tufte (2001), which could be accompanied by McCandless (2012). Design is another immense field of studies. As a starting point, one may still choose Norman (1998) although it is now slightly outdated (it is basically a renamed version of The Psychology of Everyday Things, published in 1989 by the same author). Millennials are described in Howe and Strauss (2000), but see also Palfrey and Gasser (2008) on digital natives. On globalization, I would recommend another title in the Very Short Introductions series, by Steger (2003). Some of the most important ideas—including that of authenticity—concerning the influence of mechanical reproduction of objects on our understanding and appreciation of their value are discussed in The Work of Art in the Age of Mechanical Reproduction, the classic and influential work by Walter Benjamin; for a recent translation see W. Benjamin (2008).

Chapter 3: Identity

This chapter is loosely based on chapter 11 in Floridi (2013). On the philosophy of personal identity, a rigorous and accessible introduction is Noonan (2003). A simpler overview of the philosophy of mind is offered by Feser (2006). On multi-agent systems, a great book is Wooldridge (2009). Weinberger (2011) is a valuable book on how ICTs are changing knowledge and its processes. I would also strongly recommend on similar themes Brown and Duguid (2002). Floridi (2014) is a collection of essays addressing the onlife experience. Sainsbury (1995) is the standard reference for a scholarly treatment of paradoxes. If you wish to read something lighter and more entertaining, Smullyan (1980) is still a good choice.

Chapter 4: Self-Understanding

This chapter is loosely based on chapter 1 in Floridi (2013). The idea of the first three revolutions is introduced by Freud (1917); for a scholarly analysis, see Weinert (2009). On Alan Turing, an excellent introduction is still Hodges (1992), which is essentially a reprint of the 1983 edition. On Turing’s influence, Bolter (1984) is a gem that has been unfairly forgotten. The view that we may be becoming cyborgs is articulated by Clark (2003). Davis (2000) is a clear and accessible overview of the logical, mathematical, and computational ideas that led from Leibniz to Turing. Goldstine (1993) has become almost a classic reference for the history of the computer, first published in 1972; the fifth paperback printing (1993) contains a new preface.

Chapter 5: Privacy

This chapter is loosely based on chapter 12 in Floridi (2013). Wacks (2010) offers a short introduction to privacy. For a more philosophical treatment, including the analysis of privacy in public spaces, see Nissenbaum (2010). A sophisticated analysis of the networked self is offered by Cohen (2012). For a lively discussion of security issues and how to balance them with civil rights see Schneier (2003).

Chapter 6: Intelligence

Turing (2004) is a collection of his most important writings. It is not for the beginner, who may wish to start by reading Copeland (2012). Shieber (2004) contains an excellent collection of essays on the Turing Test. Negnevitsky (2011) is a simple and accessible introduction to artificial intelligence, lengthy but also modular. Norbert Wiener (1894–1964) was the father of cybernetics. He wrote extensively and insightfully on the relations between humanity and its new machines. His three works, Wiener (1954, 1961, 1964), are classics not to be missed. Winfield (2012) is a short introduction to robotics. A graduate-level discussion of the symbol grounding problem can be found in Floridi (2011). Two great and influential works that discuss the nature of artificial intelligence from different perspectives are Weizenbaum (1976), the designer of ELIZA, and Simon (1996). A slightly dated but still useful criticism of strong AI is provided by Dreyfus (1992).

Chapter 7: Agency

Han (2011) is an accessible text on Web 2.0; Antoniou (2012) is a more demanding introduction to the Semantic Web. Dijck (2013) offers a critical reconstruction of social media. For a detailed critique of the Semantic Web see Floridi (2009).

Chapter 8: Politics

This chapter is loosely based on Floridi (2014). Linklater (1998) gives an account of post-Westphalian societies and the ethical challenges ahead. On Bretton Woods and the emergence of our contemporary financial and monetary system, see Steil (2013). Clarke and Knake (2010) approach the problems of cyberwar and cyber security from a political perspective that would still qualify as ‘historical’ within this book, but it is helpful. Floridi and Taddeo (2014) is a collection of essays exploring the ethics of cyberwar. Floridi (2013) offers a foundational analysis of information ethics. An undergraduate-level introduction to problems and theories in information and computer ethics is Floridi (2010b). On politics and the information society, two recommendable readings are Mueller (2010) and Brown and Marsden (2013). The idea that there are four major regulators of human behaviour—law, norms, market, and architecture—was influentially developed by Lessig (1999); see also Lessig (2006).

Chapter 9: Environment

To understand the nature and logic of risk, a good starting point is the short introduction by Fischhoff and Kadvany (2011). Although Hird (2010) is aimed not at the educated public but at CEOs of big companies, it offers a good overview of green computing, its problems and advantages. N. Carter (2007) guides the reader through environmental philosophy and green political thinking, environmental parties and movements, as well as policymaking and environmental issues.

Chapter 10: Ethics

The following four books are a bit demanding but deserve careful reading, if you wish to understand the problems discussed in this book in more depth:

  • Wiener (1954)
  • Weizenbaum (1976)
  • Lyotard (1984)
  • Simon (1996)

They belong to different intellectual traditions. Each of them has profoundly influenced your author.