Shale gas

Drilling for a technological revolution

Régis de Closets
11 June 2014

2 min read

The contribution of shale gas to the US economic recovery is giving rise to dreams of “cheap energy” on a world scale. But experts agree that extending this boom requires the development of more efficient and environmentally safe recovery solutions.

Shale gas is booming in the United States, where more than 2 million wells have been drilled since 2007. The sudden jump in supply has made gas in the US three times cheaper than in Europe and four times cheaper than in Japan. Proponents estimate global development of shale gas could quintuple world gas reserves. But experts warn that the way shale gas is being extracted will make its value short-lived.

In September 2013, a Stanford University study concluded that the long-term impact of shale gas on the American economy will be “marginal.” A report by French experts at the Institute for Sustainable Development and International Relations (IDDRI) criticized the gold-rush mentality that is putting short-term profits ahead of long-term production.


The US shale gas boom is driven by a multitude of small, often speculative investors; on average, 80% of a well’s resources are exhausted after just two years, and developers are focused on keeping their costs low. As a result, the hydraulic fracturing process has evolved little since American engineer George Mitchell’s pioneering work in 1991.


80% of a shale gas well’s resources are exhausted after just two years.

“It’s much trickier than it seems because we need to drill in areas that often stretch over several kilometers and are located under 30 or 40 layers of rock,” said Atanu Basu, founder of Ayata, an Austin, Texas (USA) company that employs big-data technology to analyze underlying rock layers before drilling begins. “Today, an estimated 80% of North American shale gas output comes from just 20% of fracking stages.” Catherine Gautier, a geography professor at the University of California, Santa Barbara, and co-author of Shale Gas: New Eldorado or Dead End?, offered a similar caution. “The operating model currently used in the USA must be seen against a very favorable backdrop of low costs, spacious drilling areas and conciliatory regulations,” Gautier said.

The difficulty of exporting the US model is evident in Poland, where the global oil and gas company Total pulled out of its Polish shale gas project in April 2014. It’s the latest example of uncertainty about fracking’s future.


Opponents contend that fracking pollutes the soil and leaks methane, a potent greenhouse gas. “Leaks of 4% or 5% throughout the production cycle could make gas extraction as harmful as coal,” Gautier said. “According to some studies, today’s fugitive emissions could already reach between 4.6% and 7.9% over a well’s entire lifecycle.” In a world hungry for energy, however, fracking is likely to continue, so scientists are seeking better production methods to reduce its impacts. Alternatives do exist, but research results remain mixed; fluoropropane-based fracking, for example, remains controversial.
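Gautier’s leakage thresholds can be sanity-checked with a rough back-of-the-envelope comparison. The sketch below is illustrative only: the emission factors and the 20-year warming potential for methane are assumed round values typical of the literature, not figures from the article.

```python
# Rough sketch: at what methane leak rate does natural gas lose its
# climate advantage over coal? All constants are ASSUMED round values
# (typical of the literature), not figures from the article.

GWP20_CH4 = 85.0          # assumed 20-year global warming potential of methane
CO2_PER_GJ_GAS = 50.0     # assumed kg CO2 per GJ from burning natural gas
CO2_PER_GJ_COAL = 95.0    # assumed kg CO2 per GJ from burning coal
GJ_PER_KG_GAS = 0.055     # assumed energy content of natural gas (55 MJ/kg)

def gas_footprint(leak_rate: float) -> float:
    """kg CO2-equivalent per GJ of gas delivered, for a given leak fraction."""
    gas_burned_kg = 1.0 / GJ_PER_KG_GAS                      # mass burned per GJ
    leaked_kg = gas_burned_kg * leak_rate / (1.0 - leak_rate)
    return CO2_PER_GJ_GAS + leaked_kg * GWP20_CH4

for leak in (0.00, 0.03, 0.05, 0.079):
    ratio = gas_footprint(leak) / CO2_PER_GJ_COAL
    print(f"leak {leak:5.1%}: {gas_footprint(leak):6.1f} kg CO2e/GJ "
          f"({ratio:.0%} of coal)")
```

Under these assumed constants, gas matches coal’s 20-year footprint at a leak rate of roughly 3%, and exceeds it well before the 4%-5% threshold Gautier cites, which is consistent with her warning.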

Thermal and electrical fracturing are still in the laboratory. “If we wish to make shale gas a sustainable lynchpin in our energy mix, we must develop more reliable and environmentally safe technological solutions,” said Jean-Louis Fellous, an atmospheric physicist who is Gautier’s co-author. Such research will require the combined effort of public and industrial leaders, Fellous said. “Today, extraction is an enormous issue, and the universities do not have the necessary financing.”

Thierry Bros, senior European gas and LNG analyst for French bank Société Générale and author of After the US Shale Gas Revolution, believes regulation could help mobilize industrialists. “Regulation is too coercive toward them,” Bros said.

“It should encourage transparency and greater control of their extraction process. If Europe takes up the challenge, it could promote a real alternative shale gas production model and export this know-how to other markets.” When that day comes, shale gas will play a more valuable role in a global, sustainable-energy future. ◆

Régis de Closets is a French science journalist who has produced documentaries for French television and written for Le Figaro, Le Nouvel Économiste, and the international management journal Outlook.

Eco ships

Will lower operating costs offset higher construction costs?

Gregory R. Trauthwein

5 min read

SEEMP. ECA. EEDI. BWTS. Regardless of the acronym, these regulations spell “m-o-n-e-y” for the owners and operators of commercial ships. Although the new rules are intended to increase ship efficiency and reduce greenhouse gas emissions, ship experts debate whether spending more on ship design and construction can be recouped through lower operating costs.

According to the World Trade Organization, 90% of the world’s cargo travels via ship, requiring a well-choreographed dance of nearly 60,000 vessels. But profound consolidation of maritime operations has occurred in the past 20 years, led by companies such as Denmark’s A.P. Møller – Mærsk A/S, which operates nearly 1,300 vessels across six sectors with a 2013 profit of US$3.8 billion on revenues of US$47.3 billion.

This trend toward fewer, larger operators is driven in part by the global financial collapse of 2008, as well as an upsurge in regulations aimed at making commercial ships more fuel-efficient and reducing their environmental impacts. These regulations, developed by the International Maritime Organization (IMO), include the Energy Efficiency Design Index (EEDI), the Ship Energy Efficiency Management Plan (SEEMP), Emission Control Areas (ECA) and new rules on Ballast Water Treatment Systems (BWTS).


The global maritime fleet’s capacity has nearly doubled in the past decade, from 805 million deadweight tons (dwt) in 2004 to 1,519 million dwt in 2014, according to Norwegian shipbroker RS Platou, and that capacity could triple by 2024. With high fuel prices, the Baltic and International Maritime Council (BIMCO) in Bagsværd, Denmark, sees a marked trend toward designing and building more fuel-efficient ships. These “ECO” ships are more expensive to purchase but cost-effective to operate, offering more cargo capacity thanks to new hull designs, optimized machinery and advanced monitoring and control equipment.

“Without a doubt, the biggest change we’ve seen has been the shift toward more efficient and greener ships,” said Noboru Ueda, chairman and president of the Japanese classification society ClassNK. “It has only been a few years since the industry really started to work toward innovating more efficient vessels, but we’re already seeing very impressive improvements in terms of improved fuel efficiency and EEDI scores. At the same time, new greenhouse gas reduction technologies, like air lubrication, and new low-friction paints are just entering commercial use.”



Market leader Maersk led the industry with its Triple-E class (Triple-E stands for Economy of scale, Energy efficiency and Environmentally improved), a 20-ship fleet of the largest container vessels ever built. These 18,000 twenty-foot equivalent unit (TEU) behemoths are 400 meters (1,312 feet) long, 59 meters (193 feet) wide and 73 meters (239 feet) tall, require 55,000 tons of steel each, and cost an estimated US$190 million apiece.

Compared to the 13,000 TEU ships of its competitors, Maersk projects that its 18,000 TEU Triple-E ships will burn approximately 35% less fuel per container moved.
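Using only the figures in the article, a quick sketch shows what “35% less fuel per container moved” implies for total fuel burn per voyage. The assumption that both ships sail fully loaded is mine, not the article’s.

```python
# Sketch: what Maersk's "35% less fuel per container moved" implies for
# total fuel per voyage. Assumes both ships sail full and the 35% figure
# is strictly per container (assumptions, not stated in the article).

TRIPLE_E_TEU = 18_000      # Triple-E capacity (from the article)
RIVAL_TEU = 13_000         # competitor capacity (from the article)
FUEL_PER_TEU_RATIO = 0.65  # Triple-E burns 35% less fuel per container

# Relative total fuel per voyage, with the rival ship normalized to 1.0
total_fuel_ratio = FUEL_PER_TEU_RATIO * (TRIPLE_E_TEU / RIVAL_TEU)

print(f"Triple-E capacity vs rival: {TRIPLE_E_TEU / RIVAL_TEU:.0%}")
print(f"Total fuel per voyage:      {total_fuel_ratio:.0%} of the rival ship")
```

Under those assumptions, a Triple-E carrying roughly 38% more containers would burn about 10% less total fuel per voyage than the smaller ship.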

“The most important technology to make a ship efficient is the engine,” said Jose Femenia, former head of Engineering and director of the Master of Marine Engineering Program at the US Merchant Marine Academy, now working as a consultant to New York-based energy advisory firm utiliVisor. “It should be a good, high-quality power plant with low inherent fuel consumption and a reasonable maintenance requirement.”


Emission-reduction mandates are being phased in through 2020. As limits on CO2, NOx, SOx and particulate matter (PM) tighten, ship owners must invest in new technology to comply.

For new ships constructed on or after January 1, 2016, and operating in designated emission control areas, Tier III engine regulations require an 80% reduction in NOx emissions compared with Tier I levels. Compliance with SOx emission limits will force ship owners to examine every detail, from lubrication systems to fuel choices and exhaust gas processing.

Vessel owners running on tight margins are understandably resistant. “As a CEO, I predict that we will have to raise cargo rates by as much as 35% to manage the unprecedented fuel price premium of US$400 per metric ton,” Rod Jones, president and CEO of the CSL Group of Montreal, Canada, testified to the US Congress on the North American Emission Control Area (ECA) in March 2014.

But Tom Crowley, chairman and CEO of Crowley Maritime, a US$2 billion company with a fleet of 200 US flag vessels, accepts the regulations as a cost of doing business. “I think our industry has to take a more proactive role in cleaning ourselves up,” he told Maritime Professional in 2014. “It is going to be expensive and the cost of transportation is going to increase, but I reason the world understands that. They would rather have higher transportation costs than the environmental consequences.”


The benefits of SEEMP are equally controversial. “The concept of SEEMP (Ship Energy Efficiency Management Plan) is a voluntary tool to assist ship operators with energy efficiency management,” said Peter Sand, chief shipping analyst for BIMCO. “Operators, however, choose the scope, size and price of the tool they want to implement. No system fits all.”

All ships are required to carry a SEEMP, Sand explained, but neither administrators nor port state control can check that the SEEMP adheres to IMO guidelines or that a ship is following its SEEMP plan. “Enforcement powers are limited to verifying that a SEEMP exists,” Sand said. “Even here, its continuing existence is not directly verified. The presence of SEEMP is attested by means of the International Energy Efficiency (IEE) certificate, which is issued once for the life of the ship.”



If a SEEMP is developed and followed, however, performance will improve, Femenia said. “Depending on the vessel and the owner, the room for improvement varies widely. For example, if a ship has a very large hotel load, like a cruise ship or a containership carrying reefer containers, I found that it is generally possible to squeeze out 1%-3% in efficiency, which is a significant amount of money.”

Robert Kunkel, president of Alternative Marine Technologies in Norwalk, Connecticut (USA), however, believes high fuel prices are already doing the job of SEEMP. “The prudent owner has been watching his fuel consumption and efficiency for years prior to any regulation,” he said.


Kunkel believes that new power technologies will accomplish more. “If you want to reduce your fuel consumption, if you want to reduce your emissions, don’t run your engines,” Kunkel said. “Battery power and hybrid systems will give you that opportunity.”

Kunkel’s company has supervised the construction of modern commercial ECO tankers at Hyundai Heavy Industries in South Korea, among other shipyards. Improving ships’ fuel efficiency and reducing emissions can add 20% to 25% to the cost of a ship, he said. “But what everyone needs to understand is that the shipbuilder and the ship designer throw in close to 20%-25% of risk liability, because they are dealing with new systems that they have not installed, which can cause production and delivery delays.” ◆


A brief guide to the most common acronyms for new ship-efficiency regulations:

EEDI: The Energy Efficiency Design Index (EEDI) is a performance-based mechanism that requires ships to achieve a certain energy-efficiency ratio, but leaves the choice of technologies to the owner/operator. Took effect January 1, 2013.

SEEMP: The Ship Energy Efficiency Management Plan (SEEMP) establishes a process for operators to improve the energy efficiency of ships. Took effect January 1, 2013.

ECA: Emission Control Areas set different ship-emission limits based on where ships operate. Europe and North America have the most stringent requirements.

BWTS/BWM: Refers to Ballast Water Treatment Systems (BWTS) or Ballast Water Management (BWM). Considered the most expensive ship refit proposal ever made, with an estimated cost of US$2 million to US$3 million per ship. Final rule is pending.

Polar shortcuts

As ice melts, shippers plan for Arctic routes

Gregory R. Trauthwein

5 min read

As the polar ice cap thins and recedes, ship owners eye a trek through the Arctic as a means to slash fuel costs, emissions and time in transit. Though Arctic ship traffic is increasing, significant logistical, technical and political hurdles remain before the promise of the Northern Sea Route significantly impacts global shipping patterns.

With the thinning polar ice cap creating more navigable routes than ever before, ship owners are carefully weighing the financial rewards versus the technical, logistical, legal and political risks of shipping via the shorter Northern Sea Route (NSR).

“This is an extremely dynamic issue, but the main point is the time savings, which equates to saving in bunker fuels and charter hire,” said Felix H. Tschudi, chairman and fourth-generation owner of the Tschudi Group of Oslo, Norway, which specializes in shipping, offshore and logistics services. Tschudi also is chairman of the Center for High North Logistics, a nonprofit research foundation focused on transportation solutions in the Arctic.


Though the savings in time, fuel and emissions are substantial, Arctic routes come with added risk.

“Ice itself is still a risk,” said Mikko Niini, former CEO and now an adviser to Aker Arctic of Helsinki, Finland, which operates the world’s only privately owned ice-testing facility for shipping. “It is a technical risk for the hull and the machinery. And while everyone talks about the polar melt, the season open for shipping is still short.”



In addition to ice, Arctic shippers must plan for the region’s lack of infrastructure, including port facilities and emergency response and remediation. Poor satellite coverage complicates communications and contributes to limited meteorological data. A lack of detailed ocean charts for the region, coupled with frequently dense fog, makes navigation difficult. Cold weather and long polar days can disrupt sleep, increasing the risk of human error.

Ships reinforced against ice damage also can cost 50% more than similarly sized commercial ships built for non-Arctic routes. “The ice cover has been reduced radically, but this could simply be a cycle, and it could turn again,” Niini said. The extra weight of ships designed for the Arctic makes them uneconomical for other routes, so re-freezing could render them useless.

“At the end of the day, it really comes down to dollars per ton delivered,” Tschudi said. “The relative savings of using the NSR depend on the overall health of the freight market.” For high-value cargoes in high-freight markets such as liquefied natural gas (LNG), the additional cost of ice-reinforced vessels makes sense, Tschudi said. For the low-rate bulk carrier sector, the economics of risking an Arctic route are less compelling.


One proven way for shippers to mitigate Arctic risks is to partner with Atomflot, the Russian icebreaking and escort authority.

Tschudi considers Atomflot an indispensable resource. “The icebreaking capacity is not the main thing; for me it is the escort service,” he said. “They can tow, they have a hospital, and if there is any mechanical failure you have them nearby to assist. My best advice: Listen to the advice of Atomflot and the Northern Sea Route Administration. It is also a good idea to have a Russian-speaking ice pilot onboard to facilitate communications.”

Niini concurs. “It is a safety escort, and they have fees on these operations,” he said. “If you’re bold you could do it outside of the Russian influence. But typically, anyone who has used this passage will go through the Russian route.”

Image © DNV GL


The diverse and dynamic nature of the Arctic dictates caution and collaboration. “I would make investments that would give you opportunity to go there, but not necessarily a very large fleet,” Niini said. “From the technical side, everything goes more safely and efficiently when you go step-by-step.”

Tschudi believes that one key to long-term success of the route will be to identify and secure cargoes. “If you could identify cargoes that could generate freight income on the return leg to Europe, this would make the route almost unbeatable,” he said.

Collaboration will become crucial as national, corporate and scientific interests collide. Paul Arthur Berkman, Fulbright Distinguished Scholar and research professor in the Bren School of Environmental Science and Management at the University of California, Santa Barbara (UCSB), contends that the challenges are too large for any one entity to manage.

“The solutions for safe ship operations will involve public-private partnerships, as the solutions are beyond the governments themselves to implement in isolation,” Berkman said. “It will include many elements, such as the International Maritime Organization’s Polar Code, training and certification of the seafarers, the insurance industry, companies with fleets to operate effectively in the Arctic, classification societies and state responses.”

His colleague at UCSB, Oran Young, is a renowned Arctic expert and a leader in the fields of international governance and environmental institutions. “My sense is that we are beyond the visionary stage,” Young said. “We are now moving into the stage of realism where you start thinking of the concrete, substantive bottom-line types of considerations,” including the need for return cargo, new ship designs, and research into insurance issues.

“There are estimates that global ship traffic will triple in the next 10 years,” Berkman said. “Proportionately, what is the increase in the Arctic? If we know that, we can begin to model what it might look like, which then translates into the infrastructure needed to support that increased traffic. We are really just at the beginning of this journey, and it will take a collective response to envision what this infrastructure for all of these commercial operations will look like across the 21st century.” ◆

Creating the Polar Code

While commercial activities in the Arctic continue to increase at a brisk pace, the Arctic and its future have come to the forefront of the International Maritime Organization (IMO), the specialized United Nations agency responsible for the safety and security of shipping and the prevention of marine pollution. IMO is developing a Polar Code to cover a wide range of design, construction, equipment, operational, training, search-and-rescue and environmental protection issues relevant to ships operating in polar waters. To date, the IMO has agreed to draft codes for safety and pollution-prevention treaties, as well as chapters relating to training and manning, fire protection, and safety and life-saving appliances, and it is working with other UN committees to implement them.

The Cost of Arctic Ships

Commercial ships are long-term investments, expected to operate 25 years or more, and costly to maintain. Ships built to withstand the Arctic’s rigors cost more still: they are heavier, and heavier ships burn more fuel.

One of the most significant shipbuilding projects to support Arctic-route operations is Daewoo Shipbuilding and Marine Engineering’s construction of sixteen 300-meter (984 feet) Arctic LNG carriers with a capacity of 170,000 cubic meters (6 million cubic feet) each, being built in South Korea for operation beginning in 2016. “This is a real breakthrough in Arctic commercial transportation traffic, as this is the first project on a regular commercial traffic basis,” said Mikko Niini, senior adviser to Aker Arctic.

Siberia’s Yamal LNG project, which is ice-bound nine months of the year, is one of the largest industrial undertakings in the Arctic, with more than 200 wells. While the final cost of the ships is confidential, professional estimates are in the region of US$300 million each, approximately 50% more than the cost of a similarly sized, standard LNG carrier. In addition to more steel and power, several key ship systems must be winterized, including protection of deck equipment, ballast-tank heating, pre-heating of equipment and in-housing insulation.

Gregory R. Trauthwein is editor and associate publisher of Maritime Reporter & Engineering News and has covered the global maritime market for more than 20 years.

Watch the change in Arctic routes:

The OEM handoff

Shifting design responsibility puts greater burden on aerospace suppliers

Tony Velocci

2 min read

Large aerospace manufacturers are requiring small suppliers to share more of the business risks and financial investments associated with development and production programs – all in an environment that requires the highest levels of quality and reliability, meticulous adherence to promised dates of delivery and year-on-year cost reductions.

Aerospace suppliers are in the vortex of turbulent times and the speed and skill with which they adapt to the tumult will determine their fate.

Among the forces that will separate the leaders from the laggards are globalization, emerging competitors, growing price pressures, unprecedented commercial aircraft production rates, a protracted downturn in defense spending in mature markets and a restructuring of relationships between original equipment manufacturers (OEM) and lower-tier suppliers.

All capital goods industries rely on a global network of parts suppliers. Today, supplier-created content represents 50%-60% of the value of an aerospace system, according to Lockheed Martin Executive Chairman Bob Stevens, and he predicts that figure will continue to grow. Relentless price pressures from end-use customers, plus their demand for more affordable products, are driving large companies at or near the top of the supply chain to shift even more responsibility for innovation and productivity gains to the smaller, lower-tier builders of components and subsystems.


Supplier-created content represents 50%-60% of the value of an aerospace system.


“Suppliers are bearing an enormous amount of pressure today,” Stevens said, referring to the smaller companies that support the large OEMs that serve defense markets. In most regions, those markets are shrinking due to resource constraints.


To survive, much less thrive, experts say, lower-tier suppliers will need to make step-change improvements in managing operating costs, be more innovative and forge closer partnerships with their OEM customers.

“Companies that simplify processes, implement lean manufacturing principles and redesign infrastructure are the ones most likely to succeed,” said Scott Thompson, who leads the aerospace consulting practice of PricewaterhouseCoopers.

In parallel with shifting more responsibility to lower-tier suppliers, some OEMs are pursuing vertical integration to gain greater control over the quality of critical parts and acquire strategic technological expertise. In 2012, for example, General Electric purchased Italy’s Avio S.p.A., a major low-pressure turbine and gearbox supplier, and US-based Morris Technologies, a precision manufacturing company. Both acquisitions significantly expanded GE’s additive manufacturing capabilities.


One market force driving change above all others is the need for OEMs and their vendors to deliver more affordable products and get them into the hands of aerospace customers faster.

Kent Statler, executive vice president and chief operating officer of Commercial Systems at Rockwell Collins, a provider of avionics and information technology systems based in Cedar Rapids, Iowa (USA), believes the key to improving cycle times is for OEMs to involve suppliers earlier in the product design process.



“Growing cost pressures and the logarithmic increase in complexity of air vehicles require closer risk-sharing partnerships between systems integrators and their suppliers to meet customer expectations,” Statler said. “For their part, suppliers have a responsibility to participate in the conceptual phase and contribute ideas.”

Michael Yates, president of Tactair Fluid Controls, a US-based manufacturer of hydraulic and pneumatic controls for the aerospace industry, said the trend requires a closer relationship between OEMs and their suppliers. “Lower-tier suppliers must understand their responsibilities, and that happens only when there is open communication,” he said. The stronger the relationship lower-tier suppliers can create with OEMs, the better the chances of the supply chain delivering the optimal customer experience. ◆

Defining values

Executive chairman piloted Lockheed Martin safely through turbulent times

Tony Velocci

5 min read

Lockheed Martin Executive Chairman Bob Stevens retired as chairman and CEO of the world’s largest aerospace company at the end of 2013, after 10 years at the helm. He leaves a legacy of leadership, innovation, and an astute sense for guiding a large enterprise through challenging times in the pursuit of excellence.

Lockheed Martin Executive Chairman Bob Stevens has a philosophy about optimizing the customer experience – regardless of the industry or the nature of the business – that goes like this: Listen more than you talk, develop a feel for what that experience should be, and share what you learn with your product design team.

Before his recent retirement as chairman and CEO of the world’s largest aerospace company, Stevens routinely encouraged his executive team to learn by going where the company’s customers have to apply themselves in order to succeed. “When you’re sitting in an office, you’re too insulated from the customer experience,” he said. “Leaders need to see the world through their customers’ eyes.”

Image © Lockheed Martin Corp.

Over the years, Stevens soared to the edge of space in a U-2 reconnaissance aircraft; flew “nap of the earth” at night, hugging the terrain, in an AH-64 Apache helicopter; visited with US soldiers and airmen deployed in Afghanistan; and catapulted off the deck of an aircraft carrier in an F/A-18 fighter.

“Every market segment has a customer community with certain interests that defines value in a specific way,” said Stevens, who also serves as lead director on the Board of Directors of Monsanto, a global agriculture company in St. Louis. “It is incumbent upon the leadership of an enterprise to understand what that value exchange is. Customer expectations and preferences can change pretty dramatically.”


Stevens has personal experience with dramatic changes in customer expectations. Lockheed Martin’s principal customer is the US Department of Defense (DoD), whose overwhelming preference for decades was for transformational, game-changing technologies. The DoD was so eager to field the most capable systems imaginable, it often pushed manufacturers to skip an entire product generation.

While government customers still want Lockheed Martin to invest heavily in research and development, DoD issued new marching orders to its industrial base in 2012. No longer able to afford the most elegant technology solutions to complex national-security challenges, DoD pressed its equipment suppliers to deliver more affordable products that could be fielded sooner.

Lockheed Martin and hundreds of other aerospace companies will follow this path for the foreseeable future. But retooling an enterprise to effectively respond to changes in customer demands, especially on this scale, “is a real challenge,” Stevens said.

Career-defining challenges became a hallmark for Stevens soon after he joined Lockheed Martin in 1996 as the president of the Air Traffic Management division. In his first three years with the company, he completed a series of difficult assignments that profoundly stretched his capabilities. It was all a warm-up for what followed in 1999, when he was appointed chief financial officer at a particularly challenging period in the company’s history.



After finalizing 14 mergers and acquisitions in rapid succession, Lockheed Martin faced an uncertain future. The company was carrying a heavy load of debt and suffered from a leadership vacuum, competing cultures and internal power struggles. The business portfolio was in strategic disarray, better-managed competitors were eroding sales and financial performance was anemic. Investor confidence sagged.

Within a year, Stevens put the company on a footing that helped restore outside confidence and lifted morale. Under his leadership, the company shed debt, tightened its focus on core competencies and strategically reshaped the business portfolio. Stevens implemented sweeping leadership-development practices that stretched across all business segments, from the Board of Directors to entry-level employees. He also established a climate in which a single culture could emerge and infused the enterprise with renewed competitive vigor.

“I could see the organization had gifted people and incredible potential,” he said, “and so it was a matter of shaping the architecture of the business model to allow the inherent value to flow to our customers and shareholders.”

Wolfgang H. Demisch, one of Wall Street’s most respected aerospace analysts from 1988 to 2002 and now a principal of New York-based Demisch Associates LLC, recalled Stevens’ predicament.

“He had a mess on his hands,” Demisch said. “Top management had no clear sense of direction, and Stevens created a sense of confidence internally and externally. Later, as CEO, Stevens reinforced the underpinnings of the company. He did the best job that could be done to enhance the franchise and ensure its future.”

During Stevens’ 10-year tenure, annual revenue grew 33%, earnings per share climbed nearly 200% and market capitalization expanded by more than US$5 billion. Total shareholder return climbed 107%.


Being thrown into very challenging circumstances is always a learning experience, Stevens said, drawing a parallel with his stint as a young US Marine corporal. Stevens, the son of a steelworker, grew up just outside of Pittsburgh, Pennsylvania (USA), and enlisted in the Marines fresh out of high school.

“Nobody asks what you want to do,” he said of his enlistment. “You get an assignment, and you do the best you can. It’s not as though you don’t make mistakes; what’s important is that you learn from them.”

Stevens drew inspiration not just from the Marine Corps, but also from role models in industry. When Stevens was a young manager at Fairchild Republic, a now-defunct aircraft manufacturer, the company president, who was his manager, sent for him at the end of an especially long day.

The conversation centered on Stevens’ attention to detail in a report he had written. Stevens felt sure his work reflected his best effort – and then his manager pointed out a misspelled word.

BOB STEVENS, Executive Chairman, Lockheed Martin (Image © Lockheed Martin Corp.)

“I want to rely on you, but what else might be wrong in this report?” the president challenged Stevens. After returning the document to Stevens, his boss wondered aloud whether he could ever count on him in the future. He advised Stevens to set a high standard for himself, because other people depended on his judgment. “It was a very profound experience that was instrumental in shaping my early thoughts about executive management behavior and how to get things done,” Stevens said.


By the time he arrived at Lockheed Martin, Stevens had a well-honed sense of how to assess a business situation and set goals. As a leader, he placed tremendous value on the customer experience and collaboration, noting that there are too many smart people to not take advantage of their unique perspectives in formulating a course of action.

He also viewed developing leaders as the key to success. On his watch, Lockheed Martin reinvigorated the core values and ethics code of conduct applicable to every employee, regardless of rank. He created an executive diversity council to ensure every employee would have the opportunity to excel, built succession plans for every person and position, offered rotational assignments to shape leaders with greater breadth of experience, developed the company’s unique Full Spectrum Leadership model and built a permanent Center for Leadership Excellence at the headquarters.

“Our people are our future,” Stevens said. “We need leaders who understand the enterprise and set the highest ethical standards, and have an appreciation for how the world may change and how to direct the organization through it.”



William Greenwalt, a former Lockheed Martin executive and deputy under-secretary of defense for Industrial Policy in President George W. Bush’s administration, observed: “Bob has tremendous integrity, sees the art of the possible and is able to find the right path.” Now a visiting fellow at the American Enterprise Institute in Washington, Greenwalt concluded: “That Lockheed Martin has been able to do these things well is not only important for the business, but for the security and economic well-being of the United States and its allies.” ◆

Bob Stevens on developing leaders:

Car & driver, partners in safety

Toyota’s cautious vision of driving’s future

William J. Holstein

3 min read

While experts envision a near-future of driverless cars, Toyota believes the technology’s adoption will come more slowly than predicted. To bridge the gap, Toyota emphasizes an incremental, step-by-step approach to redefining the relationship between car and driver. Compass reports on Toyota’s recent media demo.

I am sitting in the back seat of a Toyota Prius hybrid on the Tokyo Metropolitan Expressway near Haneda Airport during rush hour. From my location behind the passenger seat, I have a clear view of the driver.

We’re moving at 60 kilometers per hour (37 mph) when the driver suddenly takes his hands off the wheel, yet the car remains perfectly controlled as we exit the freeway, braking and accelerating to maintain a consistent following distance. How? Our Prius is connected to the one ahead of us via radio. The lead car controls our steering and uses a millimeter-wave frequency to measure the distance between vehicles, maintaining a constant buffer as speeds change.

This is autonomous driving, Toyota-style, made possible by the company’s Cooperative Adaptive Cruise Control system, developed as part of Toyota’s latest generation of driving-support systems.
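The gap-keeping behavior described above can be thought of as a feedback loop: the following car compares the measured distance to a desired gap that grows with speed, then nudges its own speed to close the error. The sketch below is purely illustrative and is not Toyota’s implementation; the function name, gains and gap parameters are all assumptions.

```python
def cacc_speed_command(lead_speed_mps, gap_m, desired_time_gap_s=1.0,
                       standstill_gap_m=5.0, gain=0.5):
    """Return a speed command (m/s) for the following vehicle.

    The follower tries to hold gap = standstill_gap + time_gap * speed,
    adjusting its speed in proportion to the gap error (a simple
    proportional controller; real systems are far more sophisticated).
    """
    desired_gap_m = standstill_gap_m + desired_time_gap_s * lead_speed_mps
    gap_error_m = gap_m - desired_gap_m      # positive: too far back
    return max(0.0, lead_speed_mps + gain * gap_error_m)

# At 60 km/h (about 16.7 m/s), with the desired gap exactly held,
# the follower simply matches the lead car's speed.
v = cacc_speed_command(16.7, 21.7)
```

If the measured gap opens up, the command rises above the lead car’s speed to close it; if the gap shrinks, the follower slows – which is the “braking and accelerating to maintain a consistent following distance” a passenger would feel.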

Although several of its competitors have promised to introduce autonomous cars by 2020, Toyota believes it will take longer to replicate the working relationship between driver and car. It also doubts drivers will cede control to their cars easily, so it is focusing on driver-support systems rather than driver-free ones to bridge the gap.


At a recent press tour, Toyota demonstrated several of its technologies:

Pre-Collision System. In a vast parking lot marked into lanes, the car reaches cruising speed. The parking lot is empty, except for a parked van. As the car approaches the van, a mechanized human dummy appears. Before the driver can react, the car slams on its brakes and steers right – avoiding the dummy without leaving its lane.

Panoramic View. By positioning four wide-angle cameras outside the cabin, Toyota gives drivers a view of the entire vehicle and its surroundings, allowing drivers to see pedestrians in what otherwise would be blind spots.

Lane Trace Control. This system adjusts a vehicle’s steering angle, speed and braking force to maintain an optimal position within a lane.

These technologies were installed in different vehicles; Toyota has not yet integrated them into a single package.


To support its efforts, Toyota created a Collaborative Safety Research Center, which is pursuing 26 research projects with 16 universities and hospitals worldwide. Chuck Gulash, based in Ann Arbor, Michigan (USA), is the director.



The job of controlling a vehicle has become infinitely more complex, Gulash explained. Today, drivers need to monitor information screens and heads-up displays while listening to voice prompts and a variety of warning sounds. Gulash said Toyota’s goal is to create a “true inter-relationship between the driver and an intelligent vehicle,” with the car and the driver as “teammates” who share the common goal of saving lives.

“The best teammates learn from each other,” Gulash said. “They watch, listen and remember. They adapt. They communicate. And they assist, when needed. Over time, a foundation of trust is built.”


Toyota is using Microsoft Kinect technology to program a car it calls the Driver Awareness Research Vehicle (DARV) to recognize multiple drivers. The stationary car and driver have a conversation about the destination before a trip begins; the car informs the driver about traffic conditions and suggests alternate routes.

Three Toyota vehicles equipped with Cooperative-Adaptive Cruise Control automatically maintain a consistent distance between the cars, even as speeds change. (Image © Toyota)

Toyota also has worked with Stanford University’s driving simulator, which tracks brain activity with sensors and eye movements with electronic headgear, documenting the driver’s brain activity and behavior. “The simulator can shift from fully automated control, to driver in full control, to mixed control, instantly,” Gulash said.

Such tracking technology could help create a system in which a car driving itself in autonomous mode could send a “take over now” alert to the driver if the vehicle senses conditions are becoming too complex for it to manage.
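The alert logic described above can be sketched as a simple decision rule: request a takeover only when the scene exceeds what the car can manage and the driver appears attentive enough to accept control; otherwise fall back to a safe stop. Everything here – the function name, the normalized scores and the thresholds – is invented for illustration, not drawn from any Toyota or Stanford system.

```python
def handover_decision(scene_complexity, driver_attention,
                      complexity_limit=0.8, attention_floor=0.3):
    """Decide whether the car should hand control back to the driver.

    Both inputs are assumed to be normalized 0..1 scores (e.g. from
    sensor fusion and driver-monitoring hardware). Thresholds are
    hypothetical.
    """
    if scene_complexity < complexity_limit:
        return "autonomous"            # car keeps driving itself
    if driver_attention >= attention_floor:
        return "take over now"         # alert the driver to resume control
    return "slow and stop safely"      # driver not ready: degrade gracefully
```

The third branch matters: a “take over now” alert is only useful if the driver can actually respond, which is exactly the open question the article raises.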

But will humans be able to develop situational awareness and react fast enough if a car suddenly throws control back to them? Toyota is trying to find the answer, but the question is one reason the company doubts the mass market will be ready for fully autonomous cars any time soon. The balance between human and machine may shift to some extent, but Gulash and his team believe that humans will need to retain ultimate control — at least for now. ◆

Autonomous automobiles

Driverless vehicles will make travel safer, less stressful

William J. Holstein

3 min read

T. E. “Ed” Schlesinger, dean of engineering at Johns Hopkins University in Baltimore, believes that an era of autonomous vehicles will arrive within the next 15-20 years. Schlesinger, who founded the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab before joining Johns Hopkins, is an expert in engineering systems for driverless cars.

COMPASS: We’ve seen a raft of announcements that auto manufacturers are aligning themselves with Google, Apple or Microsoft to create operating systems for cars that will allow them more autonomy. Why now?

ED SCHLESINGER: The added value in automobiles is becoming the information technology infrastructure, both within the automobile and for the automobile to communicate with other vehicles. It’s as much about economics as anything else. Style and comfort will always matter to people. But the technologies that guarantee safety and connectivity will become increasingly important.

The National Highway Traffic Safety Administration recently announced that it wants all cars to broadcast their location, speed and other data. What is your analysis of this announcement?

ES: That’s the beginning of building out that infrastructure. One can imagine the cars communicating with one another and with other devices and systems in the world.

So how is autonomous driving going to actually unfold?

ES: The regulatory environment will have to allow these technologies to be deployed. There will need to be some sort of standardization in terms of interoperability. You can’t have a situation where different cars and devices made by different manufacturers are not able to talk to each other in a reliable manner. It’s not going to be that different from what we see in air traffic control systems. We expect both Airbuses and Boeings to be connected in a way that everything functions seamlessly.

Won’t manufacturers from North America, Europe, Japan, South Korea and China compete to create their own systems? How can a global set of standards emerge?

ES: I think there may be initially a kind of jockeying for position as to who will set the standard. But ultimately, it’s to the benefit of everyone that there is a standard. We’ve seen this in other technologies. Televisions in Europe were one system and televisions in the United States were another. If you bought a DVD in one country, you couldn’t necessarily play it on a device from another country. It seems unattractive to have cars that only operate in the United States and others that only operate in Europe. I think manufacturers around the world will have an incentive to converge on a standard.



So you believe that all the issues can be sorted out to allow for completely autonomous vehicles?

ES: Yes, my personal feeling is that we will have greater autonomy and then complete autonomy. Autonomous vehicles will start entering in niche applications. In seaports, the 18-wheeler trucks will become autonomous. Rental car buses in airports will become autonomous. There will be dedicated lanes from Los Angeles to San Francisco, and autonomous trucks will make 80% of the journey. When it gets close to San Francisco, a human driver will get into the cab to take it into more complex routing.

And all this will be safer than what we have today?

ES: I have complete confidence that autonomous vehicles are safer, much like air travel. Everyone knows that air travel is far safer, statistically, than car travel. The system uses very well-engineered vehicles, and the pilots are highly trained, monitored and reviewed. You have a tremendously safe, reliable system. It will be the same with autonomous vehicles. You won’t have any individuals getting tired or drunk or distracted. The vehicles will be able to know what traffic patterns are, what the ideal routes are, and overall congestion will be relieved. Nobody will be making errors. There will be far fewer accidents. And there will be no road rage.

After this system is implemented, society will look back and ask, “Can you imagine that any 16-year-old was allowed to get behind the wheel of a 2,000-pound car and start driving it?” It will seem absurd that we allowed that to happen. ◆

Watch Carnegie Mellon’s driverless car in action:

Virtual training

Simulations are more than fun and games

Martin Koppe

3 min read

Training technicians and engineers involves costly, fragile and sometimes hazardous equipment. The alternative? Virtual labs. Based on digital technology, these labs have become powerful learning tools, allowing students and teachers to overcome the constraints of the physical world to train safely.

The Internet is revolutionizing education with MOOCs (Massive Open Online Courses), distance learning, flip teaching and more. While these online learning tools have already proven effective, some skills still require hands-on training. Whether learning to work in a nuclear power plant or run an oil rig, the virtual lab provides hands-on experience in a setting where making an error is safe and nothing blows up.


With 3D simulation, the learner interacts with digital equipment that behaves like the actual device, providing a flexible setting to gain understanding through all kinds of practical exercises, from the most basic to the highly complex. The idea, which simulation software experts have dreamed of for more than 20 years, took shape with the emergence of computing tools powerful enough to run these compute-intensive programs.



“Physical systems are excellent teaching tools, but they must be multiplied by the number of students so that everyone can use them at the same time,” said Frédéric Xerri, a teacher in mechanical engineering at Louis Armand High School in Nogent-sur-Marne, France. “For hands-on exercises in my industrial product design curriculum, I use high-tech equipment; some physical models cost between €10,000 (US$13,500) and €15,000 (US$20,500). In my classes of 24 students, this poses financial difficulties, even if I divide students into small groups. Thanks to the virtual lab, students can experiment on a realistic model of the tool on their computers and then take turns trying out the real system. This ‘cyber-physical’ combination optimizes the time spent on the actual demonstrator and offers a more instructional experience.”

Another advantage is that the learner can make mistakes in the virtual world, which better prepares them to handle the delicate physical equipment, thus promoting the idea of “getting it right the first time.”


The Georgia Institute of Technology (Georgia Tech), a university in Atlanta that counts engineering as one of its specialties, organizes summer workshops for high school students. There, teenagers use the LEGO Mindstorms NXT 2.0, a popular introductory kit for programming and robotics that serves as a virtual form of hands-on training.

“The students must build robots capable of moving through an obstacle course,” explained research engineer Srujal Patel, who supervises the students. “But they cannot do it with LEGOs alone. They must design the missing pieces themselves.” Students test everything on the computer and do not proceed to 3D printing their robots until their virtual trials are conclusive.

Students in different locations can share and work on common digital models at the same time. Together, they can tweak their models without having to be at the same computer, resulting in an optimized robot before manufacturing – without distance getting in the way.


Despite criticism surrounding loss of contact with reality, teachers’ enthusiasm leaves little room for doubt about the success of these methods. “It is tempting to do everything virtually, especially for students who were born in the digital age,” Xerri admitted. “But fortunately, we always need to check and give concrete expression to students’ work. Nothing can replace the feeling of accomplishment when we see that the object we have imagined actually works.”

Another advantage of this type of learning is that it avoids mishandling or deterioration of the physical model and eliminates the possibility of accidents caused by a careless student. The new design software that powers virtual labs also is used by companies where students will work, easing their transition from the academic world to the work world.


Schools across the globe have led manufacturers of educational training equipment to see this foray into the digital world as a positive trend, not a form of competition. “We supply high-quality equipment and care about its return on investment,” said Dr. Tom Lee, chief of education at Ontario, Canada-based Quanser, a leading manufacturer of educational equipment for engineering students, including gyros, helicopters, mechanical arms and pendulums. “To reduce the cost of use per student, we have partnered with CAD specialists to offer both our tools and their virtual counterparts. This allows us to break into markets that would otherwise have been inaccessible.”

Given the wide number of benefits offered by the combination of real and virtual training, these pioneers are blazing a trail that many other educators are likely to follow. Especially in countries where access to education is limited or inaccessible to many, virtual training can open the door to much-needed knowledge and opportunity. ◆

Louis Vuitton’s newest landmark

The jewel of the Bois de Boulogne

Dominique Fidel

3 min read

This fall, the Louis Vuitton Foundation for Creation will spread its glass wings in the Bois de Boulogne in Paris. This epic project is guided by two overarching themes: the celebration of visual art and the creativity of meeting a unique technological challenge.

A spaceship, a cloud, a crystal chrysalis: Observers have found many metaphors to describe the structure under construction for the Louis Vuitton Foundation for Creation, a new art museum that will open to the public this fall. Born from a dream shared by architect Frank Gehry and Bernard Arnault, CEO of LVMH (Louis Vuitton Moët Hennessy) and caretaker of one of the world’s largest private art collections, this glittering glass gem is the new architectural jewel of western Paris.

The foundation, officially founded in 2006 after a 15-year strategy of cultural sponsorship, supports the commitment of Arnault and the LVMH group to contemporary art. “Artistic creation has always been central to the Louis Vuitton fashion house,” said Christian Reyne, the foundation’s deputy director. “With this internationally recognized venue, we offer a great cultural breathing space in western Paris. The goal is to build more bridges connecting heritage, innovation, youth and tradition.”

The artistic programming of the “glass ship” remains secret for now, but its purpose is clear. The foundation will offer permanent collections and temporary exhibitions of modern and contemporary art, plus multidisciplinary events, debates and conferences. The collection as a whole is known to be one of the world’s largest private art collections, and the opening will mark the first time that many of the pieces have been on public display.


For Gehry’s second structure in the French capital (after the American Center, in 1993), the Canadian-American master architect proposed a revolutionary project, breaking with the signature style he has developed throughout his career. Here, visitors will find no shiny metal casing or deformed, powerful volumes. Despite its imposing proportions, the foundation’s new home has an airy silhouette that seems to fly above the Bois de Boulogne by sheer force of its glass wings.

This architectural sculpture will soon house 11 exhibition galleries, or “chapels,” in a flexible, modular convention space that can accommodate nearly 400 people. With plenty of room for reception, entertainment, leisure and research areas, the building covers an area of 145,313 square feet (13,500 square meters) on two levels and is 150 feet (46 meters) high.

Only by observing the building up close can one appreciate its structural complexity. In fact, the foundation’s home consists of two structures overlapping one another. In the center is the “Iceberg,” the functioning body of the building, made of reinforced concrete, steel and wood and covered with a façade of approximately 19,000 white concrete wafers. Surrounding this mass is the glass superstructure, consisting of 12 cantilevered sails of curved glass with a wingspan of 98 feet (30 meters) each. The sails have a steel-and-wood frame covered in aluminum mesh, which in turn supports the 3,400 glass panels.


The design took 13 years of development. “This is a one-of-a-kind creation – without a doubt, one of Frank Gehry’s wildest challenges,” Reyne said. “He made an initial sketch in 2001, after his first meeting with Bernard Arnault. Three years later, the Gehry Partners teams began working on the architectural model, and the following year LVMH commissioned the Paris office of the Studios Architecture agency to manage the project in France.”

Studios Architecture Paris was a natural choice due to its past experience working with LVMH. “We already knew LVMH because we had done the interiors of two of their office buildings,” said James Cowey, the company’s CEO. “And we had worked with Gehry on the IAC building in New York. But we never imagined what challenges were in store for us, both art- and technology-wise.”

View of the eastern façade of Louis Vuitton Foundation’s building in November 2013 (Image © Louis Vuitton Foundation / Louis-Marie Dauzat – 2013)

Renaud Farrenq, the engineer in charge of coordinating the project at Studios Architecture, lists some of the challenges the team faced: “Unpredictable curves and counter-curves in the central building, extremely demanding load calculations, bending plates of glass to the millimeter, identifying industrial partners capable of carrying out work that had never been done before, and fire and wind testing.”


Given the complexity, the team had more than 800 people working simultaneously during the study phase, and then 750 workers at the peak of construction.

“In this kind of project, requiring ongoing technological innovation, seamless cooperation among everyone involved is crucial,” Reyne said. “In fact, we decided to put all the teams – architects, engineers and contractor – together in one place. In addition, we all relied on a single tool: 3D design software.”

The software, built by Gehry Technologies on top of a three-dimensional (3D) computer-aided design solution developed for the aerospace industry, brings together the data from all the different trades – including construction, reinforced concrete, glass, plumbing, electrical, etc. – allowing everyone to work on the same digital model and to share information in real time.

“It was the first time that this software was used on a construction site in France,” Cowey said. “So we had to adapt it to French reality, especially to French law. But it’s clear that it played a key role in the success of a project that pushed the limits as much as possible.”

In 2012, its use of the software program earned the Louis Vuitton Foundation the BIM (building information model) Excellence award, bestowed by the American Institute of Architects. Since then, the building has won several prestigious awards and is now studied in Harvard University’s architecture curriculum. ◆

Automotive age in transition

From jobs to privacy, driverless cars will bring profound social change

William J. Holstein

6 min read

Autonomous vehicles (AVs) are poised to create massive societal impacts while transforming urban landscapes and national economies. From the work we do to how our cars express our personal styles, the automobile’s traditional impact on society is about to fundamentally change.

The automobile has been one of the central organizing technologies of life for more than a century. It has shaped where people live, how they get to work, how they socialize, how they entertain themselves and much, much more.

Now, a fundamental transformation in people’s relationships with their vehicles is at hand: cars that drive themselves. Driver-assistance technologies are already common in new automobiles, and new cars are becoming smarter every year.

Raj Rajkumar, co-director of the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab in Pittsburgh, predicts that fully autonomous driving will be available in industrialized countries by 2020, just six years from now. Resolving issues of liability and insurance and winning broad societal acceptance of the technology will take five more years. “I think humans will remain in charge at least until 2025,” he said.


The advent of AVs portends benefits – and costs – for billions of people. The World Health Organization reports that 1.2 million people die in automobile accidents each year worldwide, a statistic The Washington Post estimates could triple by 2030 as consumers in developing nations acquire their first automobiles. “If you add up the economic costs of these accidents and deaths, it easily exceeds half a trillion (US) dollars a year,” Rajkumar said.

AVs should sharply reduce highway deaths by eliminating driver errors, including drunk and distracted driving. That, in turn, will greatly reduce the human suffering and economic disruption caused by highway fatalities.

“These automobiles will behave differently from a human-driven car,” said Alexander Mankowsky, head of the Future Studies, Society and Technology Research Group at Daimler AG in Stuttgart, Germany. “You, me, everybody will have to learn to interact with these cars in new ways.” A pedestrian, for example, won’t be able to make eye contact with a human driver at an intersection to assess his or her intentions.

1.2 million

The World Health Organization reports that 1.2 million people die in automobile accidents worldwide each year.

The arrival of AVs also could redraw the maps of urban areas, airports and shopping malls. Destination-specific parking lots will no longer be required because cars will be able to park themselves in centralized garages or regional lots. Donald Shoup, author of The High Cost of Free Parking, estimates that 31% of central business district space is dedicated to parking. Driverless cars will change that pattern, freeing land for residential and commercial uses.

Paradoxically, if automated cars allow the driver to perform other tasks, commuting may no longer be a costly waste of time, allowing workers to choose to live outside the expensive urban core.

“The introduction of AVs could strengthen a trend toward even more dispersed and low-density land-use patterns surrounding metropolitan regions,” the RAND Corporation, a nonprofit research institution based in Santa Monica, California (USA), predicted in a 217-page study released in January 2014.


Experts also predict that traffic, urban congestion and air pollution will be relieved because self-driving cars will choose the most efficient traffic routes. These vehicles also will improve mobility for children, the elderly and the physically challenged.

“Don’t think about the vehicle being autonomous,” said T. E. “Ed” Schlesinger, dean of engineering at Johns Hopkins University in Baltimore. “Think that when you buy a vehicle, it comes with a chauffeur. The chauffeur doesn’t require a salary, and this chauffeur never eats, sleeps, drinks or gets tired. It is always on call. If you think in those terms, it’s easier to imagine the social implications.”

The elderly should be major beneficiaries. “It doesn’t matter how old I am, I am still mobile,” Schlesinger said. “I can visit my grandchildren, get medicines and go to the store. That keeps elderly people out of assisted-living centers. Parents also are not running around picking up kids from soccer practice. The kids get into the car and it takes them to piano practice. It takes them to soccer practice. The concept of a ‘driving age’ goes away.”

Google’s fleet of self-driving vehicles has logged 500,000 miles (nearly 805,000 kilometers) of testing without mishap, moving the prospect of driverless cars well beyond the theoretical. To demonstrate how such vehicles might affect physically challenged citizens, Google produced a video of one of its vehicles taking a blind man to visit a drive-through restaurant and pick up his dry cleaning.

“This would change my life,” says the “driver,” Steve Mahan, who is 95% blind. The video clip has been viewed 5.5 million times since its release on YouTube in March 2012.


In economic terms, the productivity of entire nations could be boosted if people who might otherwise be stuck in traffic can devote that time to work. But driverless cars also could create significant churn in employment patterns, said John Challenger, chief executive officer of outplacement firm Challenger, Gray & Christmas in Chicago.

AVs will eliminate jobs for professional drivers, including taxi and truck drivers, and service jobs such as parking attendants. A family might need just one car, or several families might be able to share a car, reducing demand for automobiles and the workers who produce them. If accidents no longer occur, automobile insurance and the jobs of sales people and claims adjusters may be eliminated, along with the need for body shops to repair crash damage and vast graveyards for storing crumpled vehicles.

At the same time, AVs will create new careers in building and maintaining fleets of smart cars and the communications infrastructure that will connect them.

“There will be fewer jobs for drivers and fewer jobs at gas stations if the cars are electric,” Challenger said. “There could be fewer jobs for auto mechanics if there isn’t an engine in the sense we understand it today. But that doesn’t mean there won’t be more jobs in health care, teaching and coaching.”


Society also will need to grapple with the legal implications of intelligent cars, including liability and privacy issues.

If a car piloted by a computerized system crashes with another or strikes a pedestrian, for example, who is at fault and who pays – the driver, the automobile maker, the maker of the automation system, or the network that controls it? Vehicles will carry event-data recorders, which will be able to reconstruct any accident, but the nuances of legal responsibility are not yet defined.

Automobiles will no longer be private sanctuaries, but components in a network that “knows” the location, speed and driving behavior of every vehicle at every moment. Will law enforcement agencies have access to this information? Will spouses be able to monitor their significant others’ movements? Will parents know when a teen goes out of bounds? Courts may spend years litigating such issues.


The National Highway Traffic Safety Administration (NHTSA) in the USA recently conducted a field test involving 3,000 vehicles equipped with a version of Wi-Fi communications. Based on the test results, NHTSA announced in February 2014 that it plans to require all new cars to broadcast their location, speed, direction and other data. NHTSA didn’t specify a date, but its intent is clear.
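The kind of broadcast NHTSA describes can be pictured as a small, frequently transmitted message carrying position, speed and heading. The sketch below is only an illustration of that payload; real vehicle-to-vehicle messages (the SAE J2735 Basic Safety Message sent over dedicated short-range radio) carry far more fields and use a compact binary encoding, and every name here is an assumption.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Minimal stand-in for a vehicle-to-vehicle broadcast payload.

    Real messages (SAE J2735) include acceleration, brake status,
    vehicle size and more; this sketch keeps only the fields the
    article names.
    """
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    timestamp: float

def encode(msg: BasicSafetyMessage) -> bytes:
    # JSON keeps the sketch readable; production systems use a
    # compact binary encoding to fit radio bandwidth constraints.
    return json.dumps(asdict(msg)).encode()

msg = BasicSafetyMessage("veh-001", 38.90, -77.04, 16.7, 92.0, time.time())
payload = encode(msg)
```

A receiving car would decode messages like this from every nearby vehicle several times a second, which is what makes collision warnings – and the privacy questions discussed above – possible.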

“That sent a strong message to everyone involved in it that this is really going to happen,” said John Capp, director of electrical and active safety systems for General Motors and the company’s key expert on AVs. US states including Nevada, California and Florida also have enacted laws permitting AVs to drive on their roads.

Even as governments push, automobile manufacturers have developed alliances with high-tech leaders including Google, Apple and Microsoft, whose help will be necessary to operate and link together AVs.

In Japan, Toyota, Nissan and Honda have agreed to establish a wavelength for their vehicles to communicate, putting Japan in the lead in developing standards for AVs. In Europe, Daimler’s Mankowsky said that city centers will drive the process of requiring that vehicles entering their boundaries at certain hours be equipped with advanced communications.

If city centers declare themselves safety zones and allow only certain vehicles to enter, driving city streets may become safer. But Mankowsky estimates it could take 15-20 years for a nation’s entire automotive fleet to become equipped with advanced features. In the meantime, low-income motorists with older cars could find their mobility limited.


AV experiments are occurring most often in circumstances where people move in repetitive and predictable ways. The UK, for example, is engaged in a US$2.5 million pilot of autonomous “pods” on special pathways in Milton Keynes, 45 miles (72 kilometers) northwest of London. The goal is to create a system with 100 two-passenger driverless pods by 2017.

Likewise, the Nanyang Technological University in Singapore plans to launch the city-state’s first driverless shuttle transportation system, linking the university’s campus with a research park 1.2 miles (2 kilometers) away. The shuttle, carrying as many as eight passengers, will intermingle with other traffic, integrating driverless vehicles into Singapore’s transport system.

Experts are hotly debating whether AVs can establish themselves in emerging markets such as China, which has little infrastructure in vast areas, and India, renowned for its chaotic traffic.

The RAND study suggests that developing nations could advance more rapidly than developed ones in the field of AVs. “Countries with limited existing vehicle infrastructure could ‘leapfrog’ to AV technology,” the study stated. “Just as mobile phones allowed developing countries to skip the development of expensive landline infrastructure, AV technology might permit countries to skip some aspects of conventional travel infrastructure.”

Carnegie Mellon’s Rajkumar disagrees. For example, he doubts driving habits in his native India will easily accommodate AVs. “In places like Mumbai, driving etiquette is very different,” he said. “Lane markers are just suggestions. You can have four to five vehicles riding side-by-side on a three-lane urban road.”


Many unknowns regarding self-driving vehicles remain worldwide. Even in the best-case scenarios, AVs will create winners and losers. But RAND argues that, on balance, autonomous driving will be positive for society.

“Overall, we think the benefits of AV technology – including decreased crashes, increased mobility and increases in fuel economy – outweigh the likely disadvantages and costs,” the report concluded. ◆


Impact investing

Financing good works can be profitable, too

Charles Wallace

5 min read

With foreign aid and philanthropy depleted by years of austerity, a new type of investment is gaining momentum, offering market-rate returns to help solve some of the world’s most intractable social problems. Known as Impact Investing, it collected investments of US$3.8 billion in 2013 and is projected to reach US$400 billion within two decades.

Innovative solutions to the world’s social problems abound – if you can raise the money to pay for them. Charitable foundations, while helpful, have too few resources to tackle all the world’s needs. Many countries are still recovering from the 2008 global recession, and government contributions have declined as tax revenues have fallen.

Into this vacuum, an entirely new form of financing for good works has emerged: Impact Investing (II). The concept recognizes that philanthropy alone will never raise enough funding and that, once the money is spent, the charity must raise more. In contrast, II harnesses the power of the global financial markets to raise capital for projects – often start-up businesses – that have social or environmental benefits. The investments pay real returns to investors while generating an ongoing stream of revenue for worthy causes.

“We’ve got a great idea here that can transform our societies by using the power of finance to tackle the most difficult social problems,” British Prime Minister David Cameron, who hosted a global II conference in 2013, told the delegates. “Problems that have frustrated government after government, country after country, generation after generation.”


II was born at the Rockefeller Foundation in 2007 but has spread worldwide. According to a study by US investment bank JPMorgan Chase, the II market attracted US$3.8 billion in new capital in 2013 alone. JPMorgan Chase estimates the market will grow to US$400 billion in the next two decades.

“While the market is still in a very nascent state, we’ve noticed an increased number of investors who are actively exploring investing using these strategies,” said Luther M. Ragin Jr., CEO of the Global Impact Investing Network, a nonprofit organization dedicated to increasing the scale and effectiveness of II. “Managers who had first-time funds several years ago are now launching second and third funds that are raising significantly higher capital from newer and more robust groups of investors.”

How does II work? Consider a UK business that collects waste oil from motorcar service stations. Offering automobile owners an alternative to dumping their motor oil down a drain creates an obvious positive benefit to the environment, yet it might have trouble attracting start-up capital. But Bridge Ventures, an II fund manager in London, raised US$460 million in capital from a group of pension funds, creating a general II fund that provided capital to the company. The pension funds earned a return on their investments of between 12% and 20%, comparable to other market-rate returns.

Thanks to a pump provided by Rent-to-Own Africa, a farmer in Zambia irrigates his crops. The company, which received startup funding from the Lundin Foundation, provides farm equipment at affordable rents that convert to ownership. (Image by Eliot Marcus, © Rent-to-Own Zambia)

Other II projects have included start-up companies that help the poor get food in California, make mobile payments available to the unbanked in India, and let patients in countries where drug counterfeiting is common use their cellphones to determine if their medicines are genuine. All the investments have one thing in common: They must have a social or environmental benefit in addition to providing a real return on initial investment. Most of the investments are done as private equity, meaning investors buy a part of the company; others come in the form of loans.

“Financial inclusion and renewable energy infrastructure are two areas that offer fantastic risk-adjusted returns while achieving great social and environmental benefits,” said Adam Wolfensohn, who runs the Impact Investing area of fund manager Wolfensohn Fund Management, L.P. of New York, London and New Delhi.

“We focus on green energy, energy efficiency and resource efficiency like water treatment,” said Anthony Hewat, managing principal at Johannesburg, South Africa-based Lereko Metier Sustainable Capital Fund. “But it’s no different from our normal investment process.”

Hewat last year raised 690 million South African rand (US$61 million) to fund newly private power companies in Africa that use renewable energy. Investors included TransNet, the pension fund of former South African government transportation workers.


Impact investments attract capital from several sources, primarily pension funds and high-net-worth individuals who want their excess capital to generate social benefit as well as financial returns.

An investor can invest directly in an end-user. For example, a major US insurance company recently made a US$120 million investment in bonds to finance a housing complex for low-income individuals. Interest on the bonds will be paid from rental income generated by the housing project.

In an II fund, a team of advisers collects investors’ capital and then buys shares in a variety of companies. More than 280 II funds now exist, mainly in Europe and the USA, but also scattered throughout Asia, Africa and Latin America. Backing multiple projects helps reduce the individual risk to investors because their funds are spread over several activities. The managers also have more experience evaluating II than normal fund managers have.

2013 saw the rise of II funds of funds, which pool investors’ money and use a middleman manager to invest in a variety of different II funds, reducing the risk and increasing the reach of each investment dollar. The first to market was a US$55 million II fund from UBS, the Swiss bank, which raised funds from its private-wealth customers.

“The whole notion of responsible behavior is much more part of everyone’s thinking than a generation ago, so the next generation will clearly have more affinity and more interest for Impact Investing,” said Mario Marconi, UBS’s global head of family services.

Another innovation is the so-called social impact bond (SIB). Bank of America Merrill Lynch raised US$13.5 million in December 2013 for the State of New York using an SIB to fund a job-training program for formerly incarcerated individuals, which reduces their chance of returning to prison. Thanks to the backing of state governments, investors got market-rate interest and lowered risk.

Often, big investors like pension funds are reluctant to take on big risks, especially those that are difficult to quantify. To lower the risk, a group of first investors agrees to provide “catalytic first-loss capital,” accepting the lion’s share of an investment’s initial losses. First investors get paid a higher rate of return for their higher risk, but they also protect pension funds against some initial losses if the investment turns sour.
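The loss waterfall described above can be sketched with a few lines of arithmetic. The fund size and tranche amounts below are hypothetical, chosen purely to illustrate how a first-loss tranche absorbs losses before any senior (e.g., pension fund) capital is touched:

```python
# Illustrative sketch (hypothetical numbers): how catalytic first-loss
# capital shields senior investors such as pension funds.

def allocate_loss(total_loss, first_loss_tranche, senior_tranche):
    """Absorb losses in the first-loss tranche before touching senior capital."""
    absorbed_by_first = min(total_loss, first_loss_tranche)
    absorbed_by_senior = min(total_loss - absorbed_by_first, senior_tranche)
    return absorbed_by_first, absorbed_by_senior

# A hypothetical US$100M fund: US$15M catalytic first-loss, US$85M senior.
first, senior = allocate_loss(total_loss=10_000_000,
                              first_loss_tranche=15_000_000,
                              senior_tranche=85_000_000)
print(first, senior)   # 10000000 0 (the senior tranche is untouched)

first, senior = allocate_loss(total_loss=25_000_000,
                              first_loss_tranche=15_000_000,
                              senior_tranche=85_000_000)
print(first, senior)   # 15000000 10000000 (first-loss capital is exhausted)
```

In exchange for sitting at the bottom of this waterfall, the first-loss investors are paid the higher return the article describes.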


II still faces a number of hurdles. One problem is size. Most large institutional investors want to invest in chunks of US$50 million or more; they lack the time or personnel to manage many smaller investments. Also, laws often prohibit them from owning more than 20% of a fund. Because II funds are relatively small, institutional investors may have difficulty finding investments of sufficient size.

A patient in Nigeria gets instant confirmation that a medicine is genuine, using Sproxil’s free mobile authentication service. Fake malaria and tuberculosis drugs cause more than 700,000 deaths each year. The Acumen Fund is one of Sproxil’s investors. (Image © Sproxil)

Quantifying risk is another issue. Most investors know the risk of a power station in London, for example, but what about a similar project in Zambia? Companies are springing up to offer risk-assessment services, but lack of information remains an impediment.

Still, proponents believe the future for II is bright. “When we look around us today, some of us can feel that we are on the brink of a new entrepreneurial revolution, a social revolution,” said Sir Ronald Cohen, chairman of the G8 Social Impact Investment Taskforce organized by the world’s eight largest industrialized countries. “Impact Investing portends a real revolution driven by innovation.” ◆

Charles Wallace is a former foreign correspondent who writes about global finance from his base in New York.


Human organs on demand?

3D bio-printing opens new vistas for transplants and treatments

Charles Wallace

5 min read

3D printing is revolutionizing global industrial manufacturing, but it’s also being adopted to create human tissues. Researchers are dedicated to finding a way to produce desperately needed human organs in the lab, including kidneys, livers and hearts.

At a laboratory in San Diego, a tiny piece of liver tissue has been functioning for more than 40 days, producing proteins, enzymes and cholesterol just as a full human liver does. But this tissue – a mere 3 mm square and 0.5 mm thick – didn’t come from a human liver. It was “manufactured” on a special 3D printer, evidence of a genuine revolution taking place in the world of tissue engineering.

“This liver tissue is behaving a lot more like a full liver than any previous model, and that’s what’s really powerful,” said Keith Murphy, CEO of Organovo Holdings Inc., a company formed in 2008 with Gabor Forgacs, a professor of bioengineering at the University of Missouri (USA), to commercialize the breakthroughs in what has been called 3D bio-printing. “You have to keep going with the research and show more data, but the fact that it performs like a liver leads you to believe it’s going to also perform like a liver when you start to test drugs in that setting.”

Murphy’s excitement is shared by researchers in laboratories not only in the United States but in the UK, France, Germany and Japan as well. These far-flung scientists are deploying a wide range of specialized computer and printing technology in an effort to perfect the manufacturing of human tissues that include skin, bladders, bones and teeth.

The initial goal is to produce tissues that can take the place of natural human ones in the US$50 billion annual drug development business. The ultimate goal, however, is to produce complete, functioning organs, including livers, kidneys and hearts, which are currently available only from human donors, often with long waiting lists. If the research succeeds, it will save thousands of lives each year and eliminate the billions spent annually on lifesaving yet debilitating procedures like kidney dialysis, which treats but does not cure.


Bio-printing is a specialized form of 3D printing, which uses a computer-controlled printer to build up layer upon layer of plastic or metal in a pre-determined pattern, then fuses the layers together to make parts for everything from jewelry to an airliner’s jet engine. Instead of plastic or metal, however, bio-printing layers “bio-ink” beads composed of living cells, alternating with a gel that holds the beads together. Remarkably, once “printed” into tissue, the cells – either embryonic stem cells that can be trained to perform any bodily function, or adult stem cells chosen for a specific purpose – organize themselves into tissue and start performing the same functions they would in the human body.

3D bio-printing was the brainchild of Makoto Nakamura, now a professor of engineering life sciences at the University of Toyama in Japan, who recognized that the tiny drops of ink deposited by an ink-jet printer were similar in size to biological cells. He built the first 3D bio-printer in 2006.

“We demonstrated that 3D structures can be fabricated using different types of cells,” Nakamura said. “We succeeded in fabricating double-layered tubes with different cell types – vascular endothelial cells inside and smooth muscle cells outside – mimicking the structure of blood vessels.”


At the Laser Zentrum Hannover e.V. (LZH) in Germany, a university center specializing in laser research, engineers have succeeded for the first time in printing 3D human skin tissue with two different cell types: fibroblasts and keratinocytes. For lab-grown tissue to resemble human skin, the two cell types must grow side by side.

A team of Organovo tissue engineers works to create functional human tissues at one of the company’s development labs. (Image © Organovo Holdings Inc.)

“We tested this skin graft in humans and also in mice, and it works quite well as tissue to cover a wound,” said Lothar Koch, head of the bio-printing group at the LZH. Initially, the skin is being transplanted into mice bred to be immune-deficient, so there will be no rejection of foreign tissue. But Koch said the ultimate goal is to use a person’s own skin cells to print out new skin, avoiding the rejection issue.

So far, all of the living tissue that has been bio-printed has been extremely thin because scientists have not yet figured out how to distribute nourishment from the outer layers to inner layers of tissue. “The skin tissue is 250 microns thick,” Koch said. “If you want it thicker then you need blood vessels, otherwise the cells inside will die.”

US$5 billion

More than US$5 billion is lost annually when new drugs fail in late-stage human testing for liver toxicity.

Once thicker bio-printed human skin becomes viable, it will aid the US$265 billion annual global cosmetics market by allowing new beauty products to be tested for safety and skin reactions without animal testing, which is banned in Europe. As a result, cosmetic firms are big financial backers of bio-printing. In late 2013, L’Oreal entered into an agreement with Organovo to explore the use of 3D skin for testing skin care products.

The pharmaceutical industry is expected to be another big source of revenue. Each year, for example, more than US$5 billion invested in developing new drugs is lost when the drugs fail in late-stage human testing for liver toxicity.

Murphy said his firm expects to introduce a 3D liver by the end of 2014 that can be used to test drugs in the laboratory at an early stage, replacing less accurate tests on cells in a petri dish.

Because petri-dish cells survive for only two days, those tests fail to replicate the cumulative effect of repeated drug doses over prolonged periods in actual human use. But with the Organovo 3D liver, drugs can be tested in repeated doses, giving researchers more accurate indications of liver toxicity earlier in a drug’s development. This will reduce the investment in drugs destined to fail and avoid negative side effects in human trials.


Scientists at Columbia University in New York City are working on creating bio-printed teeth and joints, which theoretically should be simpler to sustain because they don’t require nourishment from blood vessels. For example, the Columbia team implanted an incisor-shaped tooth made of a 3D-printed scaffold into a rat’s jawbone. Within two months, the implant had prompted the growth of periodontal ligaments and newly formed bone. They also implanted bio-printed hipbones into rabbits, which were able to begin walking on their new joints within a few weeks.

There are four levels of tissue complexity. The simplest are flat structures like skin; then tubular structures like blood vessels; followed by hollow organs such as the stomach or bladder; and then the most complicated to reproduce: solid organs, such as a heart or kidney, comprising many different types of cells. To produce solid organs, scientists must figure out how to supply nutrition through blood vessels of varying size. As Koch pointed out, given the current level of knowledge, the outer layers would begin to die for lack of nourishment by the time the organ was printed.

“There’s a lot of fertile ground and unmet needs, especially in pharma,” Murphy said. “Thick, complex organs will take longer, but tissues that are on the order of a millimeter of thickness in the smallest of the three dimensions can actually be built sooner. Those are things we’re targeting to potentially put into human clinical trials within five years.” ◆


The 2013 Nobel Prize

Digital modeling and simulation yield breakthroughs in chemistry

Eric Glover

3 min read

By awarding the prize to three scientists who pioneered the use of computers, the 2013 Nobel jury in chemistry recognized the fundamental contribution of digital modeling to laboratory research. Compass talked to Martin Karplus, Michael Levitt and Arieh Warshel about their breakthrough research.

“In the past, chemists created models of molecules using plastic balls and sticks. Today, modeling is done on computers; computer simulation has become crucial for modern chemistry.” With these words, the jury for the 2013 Nobel Prize in Chemistry praised the contributions of Martin Karplus, Michael Levitt and Arieh Warshel to the vital science of molecular modeling and simulation.

The three winners’ accomplishment lies in the originality of their research method, devised more than 40 years ago. Their approach combines two levels of analysis. The first, from “classical” Newtonian physics, allows researchers to understand large molecules as a whole. The second method, quantum physics, aims to predict chemical reactions on a very small scale by simulating the behavior of atoms and electrons. While classical physics describes a molecule at rest, only the quantum approach allows scientists to understand the interactions that can take place among various molecules.

Although quantum physics is by far the more accurate of the two, it has a major drawback: The computation time needed to simulate every atomic particle of every molecule involved in a reaction is beyond the capabilities of even the most modern computers. The hybrid solution devised by the three winners overcomes this limitation by applying quantum-level simulation to only some of the particles. The results are simulations the Nobel jury described as “so realistic that they predict the outcome of traditional experiments.”

“Computer simulation allows us to observe the very structure of the protein,” said Arieh Warshel, a member of the Nobel Prize-winning team. “For example, it allows us to understand how an enzyme acts on the food digestion process. This is valuable information for future launches of new drugs.”


The scientific and human adventure that led to the Nobel Prize began in the 1970s, when Arieh Warshel joined Martin Karplus in his laboratory at Harvard University near Boston. Karplus had already developed computer-simulation programs based on the quantum approach. Warshel, meanwhile, was a recognized expert in intramolecular and intermolecular potentials (energy stored by the molecule that can be absorbed or released during a chemical reaction). And while working at the Weizmann Institute of Science in Israel, he collaborated with Michael Levitt on designing a high-performance model based on classical physics.

By 1972, Karplus and Warshel had published the first successful computer modeling of atoms, combining classical and quantum physics. One year later, Levitt joined them to work on modeling an enzymatic reaction. By 1976, their hybrid method was in widespread use worldwide.

Over the years, many improvements have enriched these pioneering efforts, taking particular advantage of the ever-improving performance of new generations of computers. “Current simulations are much less expensive and are conducted on much more powerful and complex systems,” Levitt said. “But the basic model remains the one developed in the ’70s.”


Today, computer modeling is commonly applied to all types of molecules, regardless of their size or geometry. The technique groups the atoms of a single molecule according to behavior. This enables scientists to perform quantum calculations – which consume most of the computing power – for only those parts of the molecule involved in the chemical reaction. The result saves time while improving computational accuracy.
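The grouping idea can be sketched in toy code. Everything here is a placeholder, not a real chemistry package: the point is only that the expensive quantum-level evaluation is confined to the small reactive region, while the bulk of the molecule gets the cheap classical treatment, plus a coupling term between the two regions:

```python
# Conceptual sketch (toy numbers, not a real simulation code) of the
# hybrid partitioning: quantum treatment for the reactive atoms only,
# classical treatment for the remainder, plus a coupling term.

def total_energy(atoms, reactive, e_quantum, e_classical, e_coupling):
    """E = E_QM(reactive region) + E_MM(remainder) + E_coupling."""
    qm_region = [a for a in atoms if a in reactive]
    mm_region = [a for a in atoms if a not in reactive]
    return (e_quantum(qm_region)
            + e_classical(mm_region)
            + e_coupling(qm_region, mm_region))

# Placeholder energy functions; in practice the quantum evaluation costs
# vastly more per atom, so shrinking the QM region makes it tractable.
e_qm = lambda region: -10.0 * len(region)
e_mm = lambda region: -1.0 * len(region)
e_qm_mm = lambda qm, mm: -0.1 * len(qm) * len(mm)

atoms = list(range(100))     # a 100-atom molecule
reactive = set(range(5))     # only 5 atoms take part in the reaction
print(total_energy(atoms, reactive, e_qm, e_mm, e_qm_mm))  # -192.5
```

With only 5 of 100 atoms treated at the quantum level, the heavy computation scales with the small reactive region rather than the whole molecule, which is exactly the time saving the method delivers.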

The method has particular relevance to the pharmaceutical industry. Traditionally, developing a drug has relied on random experimentation on a wide variety of molecules, a costly and inefficient process. By 3D-modeling specific molecules and digitally simulating the mechanisms that enable or inhibit biological processes, manufacturers can better focus their research.

“I’m most proud of the fact that this method has contributed to the design of general antibody models,” Levitt said. “This research began in the ’80s and led to the creation of many of the best current anti-cancer molecules, such as trastuzumab and bevacizumab.”


Digital modeling and simulation remain as transformative today as they were 40 years ago. “I am convinced that, for the moment, simulation is the only valid approach to truly understanding biological functions at the molecular level,” Warshel said.

For the new Nobel winners, the power of computers is critical to the field’s future progress. At the same time, they stress the need to develop hybrid methods by improving quantum models.

“Chemists should understand that there are several ways to use simulation at various levels of scale,” Warshel said.

Levitt agreed. “Good models will be crucial for the evolution of chemistry, particularly on issues of ‘green’ chemistry, which are so vital to the future of humanity. Optimizing existing methods and inventing new approaches – this is a challenge that should mobilize the brainpower of a new generation of researchers.” ◆


Disruptive drones

Automated mini-aircraft offer a world of commercial possibilities

Doug Chovan

6 min read

Drones are widely used for everything from military reconnaissance to weather forecasting. Today, these unmanned aircraft systems (UAS) are poised to revolutionize the commercial sector. With applications ranging from autonomous parcel deliveries to agricultural research, drones are leading a new era in aviation – if regulators can agree on how to ensure their safe operation.

Until recently, drones were best known as stealthy flying machines, with wing-mounted missiles or advanced surveillance designed for high-risk military missions. While such drones do exist, the potential for drone applications in the commercial sector is moving to the forefront.

Amazon’s pursuit of small, propeller-driven drones to remotely deliver lightweight parcels virtually anywhere in the world has garnered most of the public’s attention, forever altering the prevailing image of what an everyday drone might look like. With drones smaller than radio-controlled airplanes already on the market, industry experts are no longer debating the viability of such technology – they are predicting its economic impact.


Although commercial use of drones is currently prohibited in the United States, the US Federal Aviation Administration (FAA) is developing guidelines for release by the end of 2015. In late 2013, the FAA also published a plan for integrating commercial drone use into the nation’s complex aviation system.

“The FAA Modernization and Reform Act of 2012 was the first bill that included language requiring the FAA to integrate unmanned aircraft with manned aircraft into the national airspace system,” said Ben Gielow, general counsel and senior government relations manager for the Association for Unmanned Vehicle Systems International (AUVSI) in Arlington, Virginia (USA). “Before then, the FAA viewed commercial use of unmanned aircraft more like a fad and wasn’t taking it too seriously.”

In the US, AUVSI, a nonprofit organization devoted to advancing unmanned systems and robotics, has worked with members of the US Congress to encourage the FAA to devote more resources to the commercial use of unmanned aircraft systems, Gielow said, primarily due to projections of the technology’s economic impact.

In 2013, AUVSI projected that the expansion of commercial drone technology could create more than 100,000 US jobs by 2025, with an overall nationwide economic impact of more than US$82 billion in the first decade of operation.

Until the FAA issues guidelines, however, drone enthusiasts will have to keep their enthusiasm in check. Because commercial use of drones without proper authorization is currently illegal, the FAA recently stopped a Minnesota brewery from testing a drone to deliver beer to ice fishermen. Inspired by Amazon’s drone project, Lakemaid Beer posted an online video showing a 12-pack of beer taking flight under a six-propeller drone. Lakemaid President Jack Supple said he hasn’t given up on his brewery’s idea and plans to be ready when the FAA gives its approval.


Drone delivering a package, portraying one of Amazon’s future delivery methods (Image © mipan /

Drone laws vary greatly from nation to nation. Canada has some of the most complex: Adding a camera to a “model aircraft,” for example, immediately changes its status to “unmanned aerial vehicle,” which requires a Special Flight Operations Certificate from Transport Canada, the agency that regulates Canadian airspace. Mexico’s Civil Aviation Authority, on the other hand, is much more relaxed, encouraging peaceful uses of drones. The Mexican government is also using drones for drug enforcement and university research.

In Europe, drone regulations are case-specific and require special certification under the European Aviation Safety Agency (EASA). Promising segments for drones that would be affected by certification include humanitarian, disaster-relief and medical applications.

Deutsche Post DHL, a delivery service based in Bonn, Germany, recently conducted its first successful package delivery using a drone. The company’s “Paketkopter” flew a bit more than 0.6 miles (1 kilometer) at an altitude of about 330 feet (100 meters), carrying medicine from a pharmacy across the Rhine River to one of DHL’s offices. The drone, which can carry a payload of up to 6.6 pounds (3 kilograms), was controlled remotely by a live operator, but DHL says an autonomous, GPS-piloted drone is possible.


In Asia, China currently confines drone use to government departments or state-linked businesses. Supervision of China’s mainland airspace is divided between the People’s Liberation Army and the Civil Aviation Administration, which is responsible for anything that flies lower than 3,280 feet (1,000 meters).

Australia’s Civil Aviation Safety Authority (CASA) limits drones to work purposes, including commercial operations; private or recreational use is not permitted. Flying a drone requires a CASA-issued controller’s license, which in turn requires aviation knowledge in line with that of a private pilot’s license. Drones in Australia are approved to operate in unpopulated areas to a height of 394 feet (120 meters); they cannot operate in controlled airspace without special permission.



While Amazon’s delivery-drone concept has taken center stage, numerous practical commercial applications are in use or in testing, ranging from transporting lightweight parcels to more industry-specific tasks. AUVSI predicts that one of the most promising industries for drone use is agriculture, where applications include creating aerial maps to optimize water and fertilizer distribution, applying fertilizer, and delivering spare parts to farmers in equipment emergencies.

The first unmanned, remote-controlled helicopter for crop dusting was invented in 1987 by Yamaha Motor Company of Iwata, Japan. Today, approximately 2,400 Yamaha RMAX helicopters spray Japanese rice fields with pesticides or are used for planting, managing weeds and fertilizing.

In the heart of California’s Napa Valley wine-growing region, the University of California at Davis is testing Yamaha’s RMAX to fertilize its Oakville Station vineyard, thanks to special FAA clearance and restrictions. Such drone applications are ideal for agricultural sites where inclines are too steep for tractors, tight valleys that are unsafe for fixed-wing aircraft, or where the powerful downwash from standard helicopter blades can damage crops.

In the field of conservation, Conservation Drones, a global nonprofit group, uses drones to survey the rainforests of northern Sumatra (Indonesia) for orangutans’ nesting spots. The alternative – scientists walking with heavy equipment through the forests to document their findings – is more time-consuming, labor-intensive and costly than using drones. The images taken by the drones will help local organizations petition the government to protect national park land from developers who want to farm palm trees for oil.

Recently, both Google and Facebook have invested in drone technology to advance their shared goal of enabling Internet connectivity in underserved areas. In March 2014, Facebook acquired UK drone manufacturer Ascenta, whose aerospace engineers joined Facebook’s Connectivity Lab to focus on connectivity aircraft. A month later, Google acquired Titan Aerospace, a US startup founded in 2012 that makes high-altitude, solar-powered drones.

Drones are not limited to outdoor use, however. Qimarox, a material-handling company based in Harderwijk, the Netherlands, is studying the use of drones for picking goods off shelves and assembling them into pallet loads – inside a warehouse. The company envisions manufacturers of consumer products using drones to design a compact, flexible and scalable palletizing process.

“Because of capacity and ergonomic limitations, using people to stack goods on pallets is no longer an option for most manufacturers of fast-moving consumer goods,” said Jaco Hooijer, Qimarox’s operational manager. “Using drones, they can fully automate the palletizing process while retaining the much greater level of flexibility and scalability entailed when using people.”

US$82 billion

AUVSI projects the expansion of commercial drone technology could produce an economic impact in the USA alone of more than US$82 billion in the first decade of operation.



Despite the FAA’s recently published drone plan, experts concur that US regulators have a long way to go in developing safety regulations for the commercial use of small, unmanned aircraft. The current deadline for the FAA to produce safety regulations is August 2014, but recent testimony by the US Government Accountability Office and the US Department of Transportation’s Office of Inspector General indicates that the deadline is unlikely to be met.

Until safety regulations are in place, commercial applications of drones are all but grounded in the US. In the meantime, the FAA is setting up six drone research and test sites from Alaska to Virginia, the first of which should be operational by July 2014.

“Initially, we see the commercial use of drones limited to visual line-of-sight in daytime operations only, so they can be landed quickly in case of emergency,” AUVSI’s Gielow said. “They’ll also be flying below the floor for general aviation, at no more than 400 feet (122 meters).” For now, that ceiling excludes Amazon’s plans. “Amazon’s parcel delivery concept not only requires a drone that operates autonomously via GPS,” Gielow said, “but one with the proper sensing technology to avoid collision with another drone or other aircraft, and that is all still very much in the research stage.”

Command-and-control considerations are among the more immediate and critical drone safety issues, with important concerns such as how line-of-sight pilots will control a drone if communication is lost. For example, will the drone continue to hover at its existing altitude, return to its take-off point, conduct a preplanned orbit, or simply land somewhere else? Experts also are focused on protecting communication links from hacking and other types of interference.
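The lost-link decision the experts describe can be sketched as a simple policy function. Everything below is hypothetical: the action names, thresholds, and priorities are invented for illustration and are not drawn from any regulation or vendor's firmware:

```python
# Hypothetical sketch of a lost-link failsafe policy of the kind the
# command-and-control debate concerns. All names and thresholds are
# invented for illustration only.

from enum import Enum

class LostLinkAction(Enum):
    HOVER = "hover at current altitude"
    RETURN_HOME = "return to take-off point"
    ORBIT = "fly a preplanned orbit"
    LAND = "land immediately"

def lost_link_action(battery_pct, home_distance_m, home_reachable):
    """Pick a failsafe behavior once the control link is lost."""
    if battery_pct < 15:
        return LostLinkAction.LAND          # too little power to travel
    if home_reachable and home_distance_m < 2000:
        return LostLinkAction.RETURN_HOME   # close enough to fly back
    if battery_pct > 50:
        return LostLinkAction.ORBIT         # loiter and wait for the link
    return LostLinkAction.HOVER

print(lost_link_action(80, 500, True))    # LostLinkAction.RETURN_HOME
print(lost_link_action(10, 500, True))    # LostLinkAction.LAND
```

A real policy would also have to survive a hacked or jammed link, which is why securing the communication channel itself is the other half of the problem.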


Autonomous, unmanned flying drones are just the first of many autonomous transportation concepts lined up to change everyday life.

“Technology will disrupt every facet of every job,” US venture capitalist Chamath Palihapitiya, founder of the Social+Capital Partnership, recently told McKinsey Global Institute. The autonomous vehicle concept – including both automobiles and drones – is a disruptive technology that could significantly impact nations’ gross domestic products, Palihapitiya believes.

“Can you imagine a fleet of small electric cars that delivers mail? A fleet of drones that drops off parcels right at your doorstep? A fleet of trucks that doesn’t cause traffic congestion? An entire fleet of vehicles that provides public transportation in a predictable way? All of these things have massive potential impacts to commerce and to mobility of individuals.” ◆

