In the previous chapter, our foray into building robots was mainly focused on connecting inputs (sensors for motion, proximity, etc.) and outputs (motors, displays, LEDs, etc.). In this chapter, we’ll expand our work with single robotic devices to use robots as building blocks for shared experiences.
JavaScript was born in a web browser. What happens when web interfaces and robots merge? To give you some ideas, this chapter will introduce concepts and provide context to help you address human needs with new technologies. This is important but hard to do. A famous example is the Apple Newton, a handheld device introduced in 1993, long before smartphones became popular. The Newton was discontinued in 1998, but its successors, smartphones that added a web browser, music, and many more features, are estimated to populate the pockets of around two billion people today.
This chapter explores a few perspectives on the increasingly connected world we’re building, and gives some example projects on how you can become an active participant in creating this connected society.
If you use Twitter or Facebook, you can see shared experiences as the connections you make when you “like” or “retweet” messages in your social network. Shared experiences allow humans to connect and form new kinds of communities. With the help of robots, shared experiences get physical.
For example, you could build a simple light setup that flashes when someone tweets at you. Or, with complex devices, you can track the flow of people, things, and information around you or across the Earth.
The idea of connecting humans through robotic devices predates Facebook by decades. The science behind feedback systems is called cybernetics, and it emerged in the 1940s alongside new kinds of radar technology. Broadly speaking, cybernetics researchers study feedback systems that span humans and machines. These systems may be distributed around the world, and the idea that they work in concert often sounds like science fiction. Yet, if you look closely, a connected world is already in place:
Every time you swipe a credit card, you authorize yourself with a connected network and your digital balance changes.
Google runs analytics on search terms, while companies like Salesforce and NetSuite move the world by tracking people and objects across corporations.
In your home, you might have a thermostat or lights you can control from your phone, or a lock that automatically unlocks itself as your Bluetooth-enabled device approaches.
Tesla cars are conspicuously Internet-connected: they download over-the-air updates, such as the relatively new self-driving highway features. In the future, driving experiences will increasingly be defined by monitors, counters, and gauges fed by sensors that measure not only physical quantities, but also abstract entities like traffic or tourist attractions.
How do we move from digital experiences to physical (and vice versa)? The Internet has already demonstrated how near-instant massive exchanges of information can create systems and efficiencies previously undreamt of. By making devices that identify themselves online, update their own statuses, and even take action based on incoming information, we can create a world that is even more invisibly connected.
When you start developing experiences where people collaborate with robots, you can see robots as interfaces, similar to how web browsers provide a “screen” to the Internet.
Instead of being physically present somewhere yourself, a robot can interact or capture information on your behalf. This goes well beyond 90s-era webcams: think of the live traffic information streamed to your smartphone that helps you find the shortest path to your destination, or the robots around the world that record and track weather information.
Some argue that the extent to which connected robots have already changed our lives makes us as humans more “cyborg” (cybernetic organism) than pure human (see Figure 14-1).
It is hard to imagine modern life without the connected robotic technologies that enable it. From our smartphones, to our laptops, to the many connected devices that surround us, we depend on technology for communication, collaboration, and knowledge. This idea is promulgated and explored under the term cyborg anthropology, which you can read about at http://cyborganthropology.com/Main_Page.
Extending your senses through a few robots is just the beginning. From this foundation will emerge the possibility of interaction between hundreds or thousands of devices that may help us to find lost items, track patients in hospitals, improve transport, or enhance food production on farms. Enabling robots to connect and interact with one another can lead to enormous systems.
Imagine inputs from tiny robots collecting data of all sorts, from all sorts of places, and storing it for tracking, analytics, and ad targeting. In its most extreme expression, this is the vision of the IoT as a web of data and devices, resulting in complete information about our world.
Besides capturing data (inputs from devices and robots), the IoT provides new systems for “outputs,” tailored experiences of your environment.
One example of this is the weather clock tempescope, which visualizes weather forecasts from the Internet.
Another example might be a drone that knows how to avoid obstacles as identified by an online database.
Products with Internet access will shape new kinds of homes too (Amazon Dash buttons come to mind). The smart home will include devices that sense everything from who should and should not be present on the premises to the number of eggs in the refrigerator—and does something about it.
New kinds of environments can bring intangible information to life as physical objects in the environment. A simple example of this is the buzz and ding of your phone when you receive a text message, a notification, or a new retweet. Another example is the Doppel watch, a new kind of wristband shown in Figure 14-3. The watch sends a rhythmic pulse to your wrist to mimic a heartbeat-like sensation. The effect can be calming or enlivening, similar to listening to music.
Connecting personal environments and contexts is interesting not only for economic purposes, but also for political and artistic ones. One project by maker/artist Bilal Ghalib made the tragedy of war visceral with Internet-connected bracelets: a brief flash of heat would encircle your wrist whenever a bombing occurred, hopefully inspiring you to perform peaceful acts.
Environments with inputs and outputs also influence the design of journeys within spaces. Not so long ago, you would navigate a city or country with the help of paper maps. Modern maps can be embedded in the very environments they describe. For example, in ski areas or event arenas, there are installations that respond to sensor data and use LEDs on a laser-cut map to show which of the local ski routes or paths are suggested in real time.
In addition, information can be broadcast about particular spaces. For example, in a museum, an app might send you information about particular paintings you have viewed during the course of your visit.
Another nice example based on Bluetooth is the Tile device. With Tile, you can easily locate your car, a bag, or your keys within a range of about 20–50 meters.
How can connecting everything change the way we work and create? Our ability to make efficient systems hinges on two things: our knowledge about the system, and our ability to alter that system. Robots, as devices with sensors and actuators, give us great power to learn about and direct the course of large-scale systems.
More localized connected sensors mean a higher resolution of location-specific information, allowing us to build more complete maps in real time and work with better information. More localized connected actuators let us respond with greater precision. Put together, a network of many sensing and actuating robots allows us to gather and immediately act on information in many places at once. Their connection to the Internet gives us logging as a bonus: insight into the system and how we affect it, so we can optimize and improve. The connection can also feed us insights from outside the system.
For example, imagine a field of food crops. The status quo is to standardize watering across large areas, usually without much specific insight into the system. But now imagine that there are hundreds of soil moisture sensors evenly distributed across the field. These can report back the soil moisture as a map of which areas might be under- or overwatered. Over time, they can also provide a farmer with knowledge of seasonal patterns specific to the land.
Imagine now that the distributed connected robots have not just sensors on them, but sprinkler heads. Instead of just providing the farmer with information, the robots (complete with actuators) can regulate the watering at the right time and place according to set parameters. This lets you produce crops more effectively with less runoff.
Finally, imagine these sensing, actuating, crop-watering robots not just regulating and reporting back, but leveraging inputs from outside the system. The robots are connected to the Internet, and so have access to weather APIs. Because of this, robots can use the prescience of meteorologists to make smart decisions on how to act, rather than relying only on localized data about the soil’s current state. If it will rain tomorrow, the sprinkler heads could be programmed not to go off, even in dry areas.
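The per-node watering decision described above can be sketched as a small pure function. The moisture scale, the threshold value, and the `rainExpected` flag (imagined as coming from a weather API) are illustrative assumptions, not details from the original text.

```javascript
// Decide, per sensor node, whether its sprinkler should run.
// moisture: 0 (bone dry) .. 1 (saturated). threshold and rainExpected
// are hypothetical parameters; rainExpected would come from a weather API.
function shouldWater(moisture, { threshold = 0.3, rainExpected = false } = {}) {
  if (rainExpected) return false;      // let tomorrow's rain do the work
  return moisture < threshold;         // only water genuinely dry spots
}

// One reading per distributed sensor node.
const field = [
  { id: "a1", moisture: 0.15 },
  { id: "a2", moisture: 0.45 },
  { id: "b1", moisture: 0.25 },
];

// Dry forecast: water only the under-threshold nodes.
const dryDay = field
  .filter((n) => shouldWater(n.moisture))
  .map((n) => n.id);
console.log(dryDay); // [ 'a1', 'b1' ]

// Rain forecast: hold off everywhere, even in dry areas.
const rainyDay = field
  .filter((n) => shouldWater(n.moisture, { rainExpected: true }))
  .map((n) => n.id);
console.log(rainyDay); // []
```

Keeping the decision a pure function makes it easy to test and to swap in richer inputs (seasonal patterns, soil acidity) later.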
The result: we have much more information about the system, the system regulates plant needs more optimally, and we use our water resources more efficiently.
The example can be further expanded: what if the sensors and actuators also worked with pests and pesticides? What about soil acidity? Fertilizer needs?
Expanding beyond the agricultural sphere, similar principles apply to many large-scale processes. In a factory, for instance, we have a great deal of actuator automation, but rarely good real-time sensory logging. In a warehouse, we traditionally use humans to count inventory, but what if each item in the warehouse already knew what and where it was? That would mean instant inventory, plus a history associated with each item.
There is something wonderful about this technology, but also something frightening. Intelligent monitoring and dispensation of water from hundreds of individual nodes could have a major impact on our water usage. But the energy and materials used to make these hundreds of robots (as well as the waste once the electronics degrade) could also have a major negative environmental impact. And in human-oriented systems, privacy issues are immediately relevant. Where do we find the tipping point in these technologies? As technologists, it is our duty to consider the full impact of the technologies we create, with an eye toward building ethically and sustainably.