Trends Identified

Recyclable thermoset plastics
Plastics are divided into thermoplastics and thermoset plastics. The former can be heated and shaped many times, and are ubiquitous in the modern world, found in everything from children’s toys to lavatory seats. Because they can be melted down and reshaped, thermoplastics are generally recyclable. Thermoset plastics, however, can only be heated and shaped once, after which molecular changes mean that they are “cured”, retaining their shape and strength even when subjected to intense heat and pressure. Due to this durability, thermoset plastics are a vital part of our modern world, and are used in everything from mobile phones and circuit boards to the aerospace industry. But the same characteristics that have made them essential in modern manufacturing also make them impossible to recycle. As a result, most thermoset polymers end up in landfill. Given the ultimate objective of sustainability, there has long been a pressing need for recyclable thermoset plastics.
In 2014 critical advances were made in this area, with the publication of a landmark paper in the journal Science announcing the discovery of new classes of thermosetting polymers that are recyclable. Called poly(hexahydrotriazine)s, or PHTs, these can be dissolved in strong acid, breaking the polymer chains apart into component monomers that can then be reassembled into new products. Like traditional unrecyclable thermosets, these new structures are rigid, heat-resistant and tough, with the same potential applications as their unrecyclable forerunners. Although no recycling is 100% efficient, this innovation – if widely deployed – should speed the move towards a circular economy, with a big reduction in landfill waste from plastics. We expect recyclable thermoset polymers to replace unrecyclable thermosets within five years, and to be ubiquitous in newly manufactured goods by 2025.
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Precise genetic-engineering techniques
Conventional genetic engineering has long caused controversy. However, new techniques are emerging that allow us to directly “edit” the genetic code of plants to make them, for example, more nutritious or better able to cope with a changing climate. Currently, the genetic engineering of crops relies on the bacterium Agrobacterium tumefaciens to transfer desired DNA into the target genome. The technique is proven and reliable, and despite widespread public fears, there is a consensus in the scientific community that genetically modifying organisms using this technique is no more risky than modifying them using conventional breeding. However, while Agrobacterium is useful, more precise and varied genome-editing techniques have been developed in recent years. These include ZFNs, TALENs and, more recently, the CRISPR-Cas9 system, which evolved in bacteria as a defence mechanism against viruses. The CRISPR-Cas9 system uses an RNA molecule to target DNA, cutting it at a known, user-selected sequence in the target genome. This can disable an unwanted gene or modify it in a way that is functionally indistinguishable from a natural mutation. Using “homologous recombination”, CRISPR can also be used to insert new DNA sequences, or even whole genes, into the genome in a precise way. Another aspect of genetic engineering that appears poised for a major advance is the use of RNA interference (RNAi) in crops. RNAi is effective against viruses and fungal pathogens, and can also protect plants against insect pests, reducing the need for chemical pesticides. Viral genes have been used to protect papaya plants against the ringspot virus, for example, with no sign of resistance evolving in over a decade of use in Hawaii. RNAi may also benefit major staple-food crops, protecting wheat against stem rust, rice against blast, potato against blight and banana against fusarium wilt. Many of these innovations will be particularly beneficial to smaller farmers in developing countries.
As such, genetic engineering may become less controversial, as people recognize its effectiveness at boosting the incomes and improving the diets of millions of people. In addition, more precise genome editing may allay public fears, especially if the resulting plant or animal is not considered transgenic because no foreign genetic material is introduced. Taken together, these techniques promise to advance agricultural sustainability by reducing input use in multiple areas, from water and land to fertilizer, while also helping crops to adapt to climate change.
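Computationally, the targeting step described above is a string search: Cas9 binds where the 20-nucleotide guide RNA matches the DNA immediately upstream of an “NGG” PAM motif, and cuts roughly three base pairs before the PAM. A minimal sketch in Python (the sequences are invented for illustration):

```python
# Sketch: locate candidate Cas9 cut sites in a DNA sequence.
# Cas9 cuts ~3 bp upstream of an "NGG" PAM that immediately
# follows a 20-nt protospacer matching the guide RNA.

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return 0-based cut positions where `guide` is followed by an NGG PAM."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":
            sites.append(i + len(guide) - 3)  # blunt cut 3 bp before the PAM
    return sites

dna = "TTTACCGGATCCGAATTCGTAGGAGGTTT"
guide = "ACCGGATCCGAATTCGTAGG"
print(find_cut_sites(dna, guide))  # → [20]
```

Real guide-design tools additionally score off-target matches across the whole genome; this sketch only shows the exact-match principle.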
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Additive manufacturing
As the name suggests, additive manufacturing is the opposite of subtractive manufacturing. The latter is how manufacturing has traditionally been done: starting with a larger piece of material (wood, metal, stone, etc.), layers are removed, or subtracted, to leave the desired shape. Additive manufacturing instead starts with loose material, either liquid or powder, and then builds it into a three-dimensional shape using a digital template. 3D products can be highly customized to the end user, unlike mass-produced manufactured goods. An example is the company Invisalign, which uses computer imaging of customers’ teeth to make near-invisible braces tailored to their mouths. Other medical applications are taking 3D printing in a more biological direction: by directly printing human cells, it is now possible to create living tissues that may find application in drug safety screening and, ultimately, tissue repair and regeneration. An early example of this bioprinting is Organovo’s printed liver-cell layers, which are aimed at drug testing, and may eventually be used to create transplant organs. Bioprinting has already been used to generate skin and bone, as well as heart and vascular tissue, which offer huge potential in future personalized medicine. An important next stage in additive manufacturing would be the 3D printing of integrated electronic components, such as circuit boards. Nano-scale computer parts, like processors, are difficult to manufacture this way because of the challenges of combining electronic components with others made from multiple different materials. 4D printing now promises to bring in a new generation of products that can alter themselves in response to environmental changes, such as heat and humidity. This could be useful in clothes or footwear, for example, as well as in healthcare products, such as implants designed to change in the human body.
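The layer-by-layer build described above begins with slicing: the digital template is cut into thin horizontal cross-sections that the printer deposits in sequence. A toy sketch of that slicing step, using an implicit sphere (the dimensions are arbitrary):

```python
import math

# Sketch: slicing a digital model into printable layers.
# A sphere of the given radius is cut into horizontal layers;
# each layer is a circle whose radius follows from the sphere equation
# r_layer = sqrt(R^2 - z^2).

def slice_sphere(radius: float, layer_height: float) -> list[float]:
    """Radius of the circular cross-section at each layer height z."""
    layers = []
    z = -radius
    while z <= radius:
        layers.append(math.sqrt(max(radius**2 - z**2, 0.0)))
        z += layer_height
    return layers

for r in slice_sphere(10.0, 5.0):
    print(f"{r:.2f}")  # cross-section radii from bottom to top
```

Production slicers work on triangle meshes (STL files) and also plan the toolpath within each layer, but the decomposition into layers is the same idea.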
Like distributed manufacturing, additive manufacturing is potentially highly disruptive to conventional processes and supply chains. But it remains a nascent technology today, with applications mainly in the automotive, aerospace and medical sectors. Rapid growth is expected over the next decade as more opportunities emerge and innovation in this technology brings it closer to the mass market.
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Emergent artificial intelligence
Artificial intelligence (AI) is, in simple terms, the science of doing by computer the things that people can do. Over recent years, AI has advanced significantly: most of us now use smartphones that can recognize human speech, or have travelled through an airport immigration queue using image-recognition technology. Self-driving cars and automated flying drones are now in the testing stage before anticipated widespread use, while for certain learning and memory tasks, machines now outperform humans. Watson, an artificially intelligent computer system, beat the best human contestants at the quiz show Jeopardy!. Artificial intelligence, in contrast to normal hardware and software, enables a machine to perceive and respond to its changing environment. Emergent AI takes this a step further, with progress arising from machines that learn automatically by assimilating large volumes of information. An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future. Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over certain tasks from humans – and even perform them better. There is substantial evidence that self-driving cars will reduce collisions, and resulting deaths and injuries, from road transport, as machines avoid human errors, lapses in concentration and defects in sight, among other problems. Intelligent machines, having faster access to a much larger store of information, and able to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases. The Watson system is now being deployed in oncology to assist in diagnosis and personalized, evidence-based treatment options for cancer patients.
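The principle behind systems that “learn automatically by assimilating information” can be shown at its smallest scale with a perceptron, the simplest online learner: each example it sees nudges its internal weights toward fewer errors. A minimal sketch (the task, learning the logical AND function, is chosen purely for illustration):

```python
# Sketch: an online learner that improves as it assimilates examples.
# Each misclassified example shifts the weights slightly toward the
# correct answer; over repeated passes the error rate falls to zero.

def train_perceptron(examples, epochs=10, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = y - pred                # 0 when correct; ±1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
       for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Systems such as NELL operate at vastly larger scale and over text rather than binary features, but the core loop — predict, compare, adjust — is the same.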
Long the stuff of dystopian sci-fi nightmares, AI clearly comes with risks – the most obvious being that super-intelligent machines might one day overcome and enslave humans. This risk, while still decades away, is taken increasingly seriously by experts, many of whom signed an open letter coordinated by the Future of Life Institute in January 2015 to direct the future of AI away from potential pitfalls. More prosaically, economic changes prompted by intelligent computers replacing human workers may exacerbate social inequalities and threaten existing jobs. For example, automated drones may replace most human delivery drivers, and self-driven short-hire vehicles could make taxis increasingly redundant. On the other hand, emergent AI may make attributes that are still exclusively human – creativity, emotions, interpersonal relationships – more clearly valued. As machines grow in intelligence, this technology will increasingly challenge our view of what it means to be human, as well as the risks and benefits posed by the rapidly closing gap between man and machine.
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Distributed manufacturing
Distributed manufacturing turns on its head the way we make and distribute products. In traditional manufacturing, raw materials are brought together, assembled and fabricated in large centralized factories into identical finished products that are then distributed to the customer. In distributed manufacturing, the raw materials and methods of fabrication are decentralized, and the final product is manufactured very close to the final customer. In essence, the idea of distributed manufacturing is to replace as much of the material supply chain as possible with digital information. To manufacture a chair, for example, rather than sourcing wood and fabricating it into chairs in a central factory, digital plans for cutting the parts of a chair can be distributed to local manufacturing hubs using computerized cutting tools known as CNC routers. Parts can then be assembled by the consumer or by local fabrication workshops that can turn them into finished products. One company already using this model is the US furniture company AtFAB. Current uses of distributed manufacturing rely heavily on the DIY “maker movement”, in which enthusiasts use their own local 3D printers and make products out of local materials. There are elements of open-source thinking here, in that consumers can customize products to their own needs and preferences. Instead of being centrally driven, the creative design element can be more crowdsourced; products may take on an evolutionary character as more people get involved in visualizing and producing them. Distributed manufacturing is expected to enable a more efficient use of resources, with less wasted capacity in centralized factories. It also lowers the barriers to market entry by reducing the amount of capital required to build the first prototypes and products. 
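In the chair example above, the “digital plan” shipped to a local hub is ultimately a toolpath file. A hypothetical sketch of generating G-code (the standard CNC instruction format) for a single rectangular panel — the dimensions, cutting depth and feed rate are invented:

```python
# Sketch: the artifact that travels in distributed manufacturing is a
# digital plan. Here we emit minimal G-code telling a CNC router to
# cut out one rectangular panel (coordinates in millimetres).

def rectangle_gcode(width: float, height: float, depth: float,
                    feed: float = 600) -> str:
    corners = [(0, 0), (width, 0), (width, height), (0, height), (0, 0)]
    lines = [
        "G21 ; units: mm",
        "G90 ; absolute positioning",
        "G0 Z5 ; lift tool clear of the stock",
        "G0 X0 Y0 ; rapid move to start corner",
        f"G1 Z-{depth} F{feed} ; plunge to cutting depth",
    ]
    for x, y in corners[1:]:
        lines.append(f"G1 X{x} Y{y} F{feed} ; cut to next corner")
    lines.append("G0 Z5 ; retract")
    return "\n".join(lines)

print(rectangle_gcode(400, 450, 18))
```

A real plan for furniture like AtFAB’s would cover many parts, joinery cut-outs and tool-diameter compensation; the point here is only that the entire “product” can be expressed as a small text file sent over the network.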
Importantly, it should reduce the overall environmental impact of manufacturing: digital information is shipped over the web rather than physical products over roads or rails, or on ships; and raw materials are sourced locally, further reducing the amount of energy required for transportation. If it becomes more widespread, distributed manufacturing will disrupt traditional labour markets and the economics of traditional manufacturing. It does pose risks: it may be more difficult to regulate and control remotely manufactured medical devices, for example, while products such as weapons may be illegal or dangerous. Not everything can be made via distributed manufacturing, and traditional manufacturing and supply chains will still have to be maintained for many of the most important and complex consumer goods. Distributed manufacturing may encourage broader diversity in objects that are today standardized, such as smartphones and automobiles. Scale is no object: one UK company, Facit Homes, uses personalized designs and 3D printing to create customized houses to suit the consumer. Product features will evolve to serve different markets and geographies, and there will be a rapid proliferation of goods and services to regions of the world not currently well served by traditional manufacturing.
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
‘Sense and avoid’ drones
Unmanned aerial vehicles, or drones, have become an important and controversial part of military capacity in recent years. They are also used in agriculture, for filming and multiple other applications that require cheap and extensive aerial surveillance. But so far all these drones have had human pilots; the difference is that their pilots are on the ground and fly the aircraft remotely. The next step with drone technology is to develop machines that fly themselves, opening them up to a wider range of applications. For this to happen, drones must be able to sense and respond to their local environment, altering their height and flying trajectory in order to avoid colliding with other objects in their path. In nature, birds, fish and insects can all congregate in swarms, each animal responding to its neighbour almost instantaneously to allow the swarm to fly or swim as a single unit. Drones can emulate this. With reliable autonomy and collision avoidance, drones can begin to take on tasks too dangerous or remote for humans to carry out: checking electric power lines, for example, or delivering medical supplies in an emergency. Drone delivery machines will be able to find the best route to their destination, and take into account other flying vehicles and obstacles. In agriculture, autonomous drones can collect and process vast amounts of visual data from the air, allowing precise and efficient use of inputs such as fertilizer and irrigation. In January 2014, Intel and Ascending Technologies showcased prototype multi-copter drones that could navigate an on-stage obstacle course and automatically avoid people who walked into their path. The machines use Intel’s RealSense camera module, which weighs just 8g and is less than 4mm thick. This level of collision avoidance will usher in a future of shared airspace, with many drones flying in proximity to humans and operating in and near the built environment to perform a multitude of tasks.
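One common way to implement the “alter trajectory to avoid collision” behaviour described above is a potential field: the drone is steered by an attraction toward its goal plus a repulsion from any obstacle its sensors detect within a safety radius. A simplified 2D sketch (the safety distance and positions are illustrative, not from any real system):

```python
import math

# Sketch: potential-field collision avoidance in 2D.
# Attraction pulls toward the goal; each obstacle inside the safety
# radius pushes the drone away, more strongly the closer it is.

def avoidance_velocity(pos, goal, obstacles, safe_dist=5.0):
    """Return a unit velocity vector combining attraction and repulsion."""
    vx = goal[0] - pos[0]
    vy = goal[1] - pos[1]
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < safe_dist:
            push = (safe_dist - d) / d   # stronger when closer
            vx += push * dx
            vy += push * dy
    speed = math.hypot(vx, vy)
    return (vx / speed, vy / speed) if speed else (0.0, 0.0)

# An obstacle slightly above the direct path deflects the drone downward.
v = avoidance_velocity(pos=(0, 0), goal=(10, 0), obstacles=[(3, 0.5)])
print(round(v[0], 2), round(v[1], 2))
```

Real sense-and-avoid stacks fuse depth-camera or lidar data in three dimensions and must handle moving obstacles, but the attraction/repulsion decomposition is a standard starting point.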
Drones are essentially robots operating in three, rather than two, dimensions; advances in next-generation robotics technology will accelerate this trend. Flying vehicles will never be risk-free, whether operated by humans or as intelligent machines. For widespread adoption, ‘sense and avoid’ drones must be able to operate reliably in the most difficult conditions: at night, in blizzards or dust storms. Unlike our current digital mobile devices (which are actually immobile, since we have to carry them around), drones will be transformational as they are self-mobile and have the capacity of flying in the three-dimensional world that is beyond our direct human reach. Once ubiquitous, they will vastly expand our presence, productivity and human experience.
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Neuromorphic technology
Even today’s best supercomputers cannot rival the sophistication of the human brain. Computers are linear, moving data back and forth between memory chips and a central processor over a high-speed backbone. The brain, on the other hand, is fully interconnected, with logic and memory intimately cross-linked at billions of times the density and diversity of that found in a modern computer. Neuromorphic chips aim to process information in a fundamentally different way from traditional hardware, mimicking the brain’s architecture to deliver a huge increase in a computer’s thinking and responding power. Miniaturization has delivered massive increases in conventional computing power over the years, but the bottleneck of shifting data constantly between stored memory and central processors uses large amounts of energy and creates unwanted heat, limiting further improvements. In contrast, neuromorphic chips can be more energy efficient and powerful, combining data-storage and data-processing components into the same interconnected modules. In this sense, the system copies the networked neurons that, in their billions, make up the human brain. Neuromorphic technology will be the next stage in powerful computing, enabling vastly more rapid processing of data and a better capacity for machine learning. IBM’s million-neuron TrueNorth chip, revealed in prototype in August 2014, has a power efficiency for certain tasks that is hundreds of times superior to a conventional CPU (Central Processing Unit), and more comparable for the first time to the human cortex. With vastly more compute power available for far less energy and volume, neuromorphic chips should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence. 
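The basic unit that chips such as TrueNorth implement is a spiking neuron rather than a clocked logic gate. A leaky integrate-and-fire neuron, the simplest such model, shows the behaviour: input current charges a membrane potential that leaks over time and emits a spike when it crosses a threshold. A minimal sketch (threshold and leak values are illustrative):

```python
# Sketch: a leaky integrate-and-fire (LIF) neuron, the basic unit of
# many neuromorphic designs. Charge accumulates, leaks away a little
# each step, and a spike fires (with reset) at the threshold.

def lif_neuron(currents, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(currents):
        v = leak * v + i_in      # leaky integration of input current
        if v >= threshold:
            spikes.append(t)     # fire...
            v = 0.0              # ...and reset the membrane potential
    return spikes

print(lif_neuron([0.3] * 10))  # → [3, 7]
```

Because such neurons only consume energy when they spike, networks of them can sit largely idle between events — one source of the efficiency advantage described above.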
Potential applications include: drones better able to process and respond to visual cues, much more powerful and intelligent cameras and smartphones, and data-crunching on a scale that may help unlock the secrets of financial markets or climate forecasting. Computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways.
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Digital genome
While the first sequencing of the 3.2 billion base pairs of DNA that make up the human genome took many years and cost tens of millions of dollars, today your genome can be sequenced and digitized in minutes and at the cost of only a few hundred dollars. The results can be delivered to your laptop on a USB stick and easily shared via the internet. This ability to rapidly and cheaply determine our individual unique genetic make-up promises a revolution in more personalized and effective healthcare. Many of our most intractable health challenges, from heart disease to cancer, have a genetic component. Indeed, cancer is best described as a disease of the genome. With digitization, doctors will be able to make decisions about a patient’s cancer treatment informed by a tumour’s genetic make-up. This new knowledge is also making precision medicine a reality by enabling the development of highly targeted therapies that offer the potential for improved treatment outcomes, especially for patients battling cancer. Like all personal information, a person’s digital genome will need to be safeguarded for privacy reasons. Personal genomic profiling has already raised challenges, with regard to how people respond to a clearer understanding of their risk of genetic disease, and how others – such as employers or insurance companies – might want to access and use the information. However, the benefits are likely to outweigh the risks, because individualized treatments and targeted therapies can be developed with the potential to be applied across all the many diseases that are driven or assisted by changes in DNA.
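Once a genome is digitized, it is just a long string over the alphabet {A, C, G, T}, and questions like “how does this tumour differ from the reference?” become string comparisons. A toy sketch of calling point mutations (the sequences are invented; real pipelines align reads and handle insertions, deletions and sequencing errors):

```python
# Sketch: a digitized genome is a string, so finding point mutations
# between a reference and a sample reduces to comparing characters
# position by position.

def point_mutations(reference: str, sample: str):
    """Return (position, reference_base, sample_base) for each mismatch."""
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]

reference = "ATGGCGTACGTTAGC"
tumour    = "ATGGCGTACATTAGC"
print(point_mutations(reference, tumour))  # → [(9, 'G', 'A')]
```

In practice, such variant lists (typically stored in VCF files) are what oncologists consult when matching a tumour’s genetic make-up to a targeted therapy.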
2015
Top 10 emerging technologies of 2015
World Economic Forum (WEF)
Nanosensors and the Internet of Nanothings
The Internet of Things (IoT), built from inexpensive microsensors and microprocessors paired with tiny power supplies and wireless antennas, is rapidly expanding the online universe from computers and mobile gadgets to ordinary pieces of the physical world: thermostats, cars, door locks, even pet trackers. New IoT devices are announced almost daily, and analysts expect up to 30 billion of them to be online by 2020. The explosion of connected items, especially those monitored and controlled by artificial intelligence systems, can endow ordinary things with amazing capabilities—a house that unlocks the front door when it recognizes its owner arriving home from work, for example, or an implanted heart monitor that calls the doctor if the organ shows signs of failing. But the real Big Bang in the online universe may lie just ahead. Scientists have started shrinking sensors from millimeters or microns in size to the nanometer scale, small enough to circulate within living bodies and to mix directly into construction materials. This is a crucial first step toward an Internet of Nano Things (IoNT) that could take medicine, energy efficiency, and many other sectors to a whole new dimension. Some of the most advanced nanosensors to date have been crafted by using the tools of synthetic biology to modify single-celled organisms, such as bacteria. The goal here is to fashion simple biocomputers that use DNA and proteins to recognize specific chemical targets, store a few bits of information, and then report their status by changing color or emitting some other easily detectable signal. Synlogic, a start-up in Cambridge, Mass., is working to commercialize computationally enabled strains of probiotic bacteria to treat rare metabolic disorders. Beyond medicine, such cellular nanosensors could find many uses in agriculture and drug manufacturing.
Many nanosensors have also been made from non-biological materials, such as carbon nanotubes, that can both sense and signal, acting as wireless nanoantennas. Because they are so small, nanosensors can collect information from millions of different points. External devices can then integrate the data to generate incredibly detailed maps showing the slightest changes in light, vibration, electrical currents, magnetic fields, chemical concentrations and other environmental conditions. The transition from smart nanosensors to the IoNT seems inevitable, but big challenges will have to be met. One technical hurdle is to integrate all the components needed for a self-powered nanodevice to detect a change and transmit a signal to the web. Other obstacles include thorny issues of privacy and safety. Any nanodevices introduced into the body, deliberately or inadvertently, could be toxic or provoke immune reactions. The technology could also enable unwelcome surveillance. Initial applications might be able to avoid the most vexing issues by embedding nanosensors in simpler, less risky organisms such as plants and non-infectious microorganisms used in industrial processing. When it arrives, the IoNT could provide much more detailed, inexpensive, and up-to-date pictures of our cities, homes, factories—even our bodies. Today traffic lights, wearables or surveillance cameras are getting connected to the Internet. Next up: billions of nanosensors harvesting huge amounts of real-time information and beaming it up to the cloud.
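The biocomputers described above do three things: recognize a chemical target, store a bit or two of state, and report by an easily detectable signal. A tiny state-machine model of one such sensor makes the logic concrete (the threshold and colour signal are illustrative, not from any real device):

```python
# Sketch: a nanosensor as a one-bit state machine. Exposure above a
# threshold concentration latches the stored bit; the sensor then
# "reports" by changing colour, and stays tripped thereafter.

class NanoSensor:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.triggered = False       # the single stored bit of state

    def expose(self, concentration: float) -> None:
        if concentration >= self.threshold:
            self.triggered = True    # latch: remains set once tripped

    def report(self) -> str:
        return "green" if self.triggered else "clear"

sensor = NanoSensor(threshold=0.8)
for c in [0.1, 0.4, 0.9, 0.2]:       # concentration readings over time
    sensor.expose(c)
print(sensor.report())  # → green
```

The latching behaviour matters: a sensor that records a transient exposure event can be read out later, even if the target chemical has since dispersed.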
2016
Top 10 Emerging Technologies of 2016
World Economic Forum (WEF)
Next Generation Batteries
Solar and wind power capacity have been growing at double-digit rates, but the sun sets, and the wind can be capricious. Although every year wind farms get larger and solar cells get more efficient, thanks to advances in materials such as perovskites, these renewable sources of energy still satisfy less than five percent of global electricity demand. In many places, renewables are relegated to niche roles because of the lack of an affordable, reliable technology to store the excess energy that they make when conditions are ideal and to release the power onto the grid as demand picks up. Better batteries could solve this problem, enabling emissions-free renewables to grow even faster—and making it easier to bring reliable electricity to the 1.2 billion people who currently live without it. Within the past few years, new kinds of batteries have been demonstrated that deliver high enough capacity to serve whole factories, towns, or even “mini-grids” connecting isolated rural communities. These batteries are based on sodium, aluminium or zinc. They avoid the heavy metals and caustic chemicals used in older lead-acid battery chemistries. And they are more affordable, more scalable, and safer than the lithium batteries currently used in advanced electronics and electric cars. The newer technology is much better suited to support transmission systems that rely heavily on solar or wind power. Last October, for example, Fluidic Energy announced an agreement with the government of Indonesia to deploy 35 megawatts of solar panel capacity to 500 remote villages, electrifying the homes of 1.7 million people. The system will use Fluidic’s zinc-air batteries to store up to 250 megawatt-hours of energy in order to provide reliable electricity regardless of the weather. In April, the company inked a similar deal with the government of Madagascar to put 100 remote villages there on a solar-powered mini-grid backed by zinc-air batteries.
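The dispatch rule a grid-scale battery follows is simple to state: charge when renewable output exceeds demand, discharge when it falls short. A toy hourly simulation makes the arithmetic concrete (the generation and demand profiles are invented; capacity is in MWh, flows in MW over one-hour steps):

```python
# Sketch: simulate a battery smoothing solar output against demand.
# Surplus generation charges the battery (up to capacity); deficits
# are covered by discharge, and any remainder is unserved energy.

def simulate(generation, demand, capacity):
    """Return (state-of-charge history, total unserved energy in MWh)."""
    soc, unserved, history = 0.0, 0.0, []
    for gen, load in zip(generation, demand):
        surplus = gen - load
        if surplus >= 0:
            soc = min(capacity, soc + surplus)   # charge with the surplus
        else:
            draw = min(soc, -surplus)            # discharge to cover deficit
            soc -= draw
            unserved += -surplus - draw          # demand still unmet
        history.append(soc)
    return history, unserved

# Solar peaks at midday; demand peaks in the evening.
gen    = [0, 5, 12, 14, 10, 2, 0, 0]
demand = [3, 4,  5,  5,  6, 8, 9, 6]
print(simulate(gen, demand, capacity=20))
```

Even this toy run shows the pattern that systems like the Indonesian mini-grids exploit: midday surplus fills the battery, which then carries the evening peak, with shortfalls only when the battery runs dry.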
For people who currently have no access to the grid—no light to work by at night, no Internet to mine for information, no power to do the washing or to irrigate the crops—the combination of renewable generation and grid-scale batteries is utterly transformative, a potent antidote for poverty. But better batteries also hold enormous promise for the rich world as it struggles to meet the formidable challenge of removing most carbon emissions from electricity generation within the next few decades—and doing so at the same time that demand for electricity is growing. The ideal battery is not yet in hand. The new technologies have plenty of room for further improvement. But until recently, advances in grid-scale batteries had been few and far between. So it is heartening to see the pace of progress quickening.
2016
Top 10 Emerging Technologies of 2016
World Economic Forum (WEF)