Educating the first Internet generation poses new challenges
9 November 2012
4 min read
With their enthusiasm for consumer high-tech devices and social media, today’s students differ significantly from those of previous generations. Compass looks at what this new breed of connected learner offers to educational institutions and the business world . . . and what they expect in return.
The proliferation of consumer technology and social media has made the current generation more mobile and socially connected than ever before, changes that have considerable implications for both colleges and future employers.
“A different kind of person is emerging from higher education – a global graduate and, most importantly, a self-evolving personality,” says Olga Kovbasyuk, president of the International Higher Education Teaching and Learning Association (HETL). “He/she will have experience in global learning, and is able to apply global knowledge and skills to interact and collaborate effectively across world cultures. He/she is more self- and globally aware, and has multiple perspectives on world issues and business.”
CATERING TO THE MODERN STUDENT
Young adults of Generation Y have never been more globally connected, with access to data and contacts anytime, anywhere. It is second nature for them to use technologies such as smartphones, tablets and game consoles to interact with various communities and social networks, including Facebook, Twitter and China’s equivalent, Weibo. Higher education institutions must keep up with this demand and so are incorporating these technologies into their curriculums.
“Students have to show us the way,” says Dr. Agnes Kukulska-Hulme, associate director for learning and teaching in the Institute of Educational Technology at the UK’s Open University. “They are often ahead of ‘us’ in using the technology. We need to tap into their knowledge – not only about technology, but also about different ways of studying.”
“Any major change in the way people communicate is bound to have major implications for education,” says Daniel Clark, program leader for the Bachelor of Science program in Leadership, Enterprise and Management at London’s BPP Business School. As a result, students of the future will expect access to educational resources whenever and wherever they wish. “Some will have had many years of experience creating and sharing content, perhaps quite complex, perhaps to do with education,” Clark says. “Will they be happy to accept timetabled classes and sit through lectures?”
In “Social Media: Why It Matters to Everyone in Education,” Clark explains that the use of social media in education has changed in phases over time. “Phase One was when faculty started to use the potential of social media to support each other and for their personal and professional development,” Clark explains. The second phase dealt with how educators used social media to provide resources to each other and to students.
Phase Three, Clark says, which began recently, “is when students start to originate educational content.” For example, students of today are engaging in ‘social learning’ with blogs and peer-to-peer contacts over social media sites.
SOCIAL AND MOBILE LEARNING
So how can these students’ future employers benefit from their connectedness and social-media savvy? “These new ways of teaching and learning can improve learners’ intercultural communication competencies, which facilitate improved international relations and generate intercultural capital,” Kovbasyuk of HETL says. They also can raise students’ global self-awareness (see sidebar) and help students mature more quickly and fully.
Clark cites the example of Monica Rankin, a history lecturer at the University of Texas, who experimented with Twitter to increase student engagement in course discussions in a 90-person class. “I wanted to find a way to incorporate more student-centered learning techniques and involve the students more fully into the material,” Rankin says. Despite Twitter’s 140-character limit on each ‘tweet,’ the experiment “encouraged students to engage who otherwise would not.”
Using mobile technologies in and out of the classroom also gives students more flexibility to fit their studies around other activities, a trend that has implications for lifelong learning. “Mobile learning provides more flexibility in terms of time, place, and resources and can adapt to their lifestyle,” Kukulska-Hulme says. “Learners can be more actively engaged in determining what, when, and how to study, that is, choosing their activities and the time and place to perform them.”
GENERATION Y IN THE WORKPLACE
Just as students are pushing the adoption of new technologies in the classroom, they will expect similar – or better – levels of access in the workplace. “Androids, iPads, Google Docs, Dropbox – these and other technologies are everywhere in enterprises today,” Accenture states in “The Genie Is Out of the Bottle: Managing the Infiltration of Consumer IT into the Workforce,” published in 2011. “Often, (these devices) enter the workplace with employees, not under the company’s auspices,” the report says. “They may raise alarms, but they also present valuable opportunities to those who successfully harness them.”
Accenture surveyed more than 4,000 employees in 16 countries across five continents and found that employees believe the technologies they use enhanced innovation, productivity and job satisfaction. More than a quarter (27%) said that they would pay for their own devices and applications to use at work if the alternative was to do without.
To benefit from this enthusiasm for technology, some businesses are leveraging social media tools to build private networks that create tighter links with their employees while giving everyone improved visibility into activities across the organization.
For example, Miguel Zlot, the enterprise social networking evangelist at Molson Coors, introduced Yammer, a professional social media tool for enterprises, to the beer brewing and distribution company. “Not only is it a great way to stay connected with colleagues from different countries, but it also teaches me something new about our business every day,” Zlot says. “It could be a story about a new account from our sales team, an update on a marketing campaign that is taking off, or even a video of a new can line at work in one of our breweries.”
Another company on the cutting edge of applying consumer technology to the workplace is Internet corporation Yahoo!. When introducing the Yahoo! Smart Phones, Smart Fun! program, CEO Marissa Mayer embraced the idea that the company’s employees must use the same devices as its customers so they can understand how Yahoo!’s users think and work.
EMBRACING THE INEVITABLE
As globalization and technology continue to shape the future, businesses must strive to keep pace if they want to keep their current and future employees happy and take full advantage of their capabilities.
“IT consumerization will be one of the biggest tests for organizations in the next five years, but resisting it is simply not an option and is tantamount to capitulation,” says Jeanne Harris, executive research fellow and senior executive at the Accenture Institute for High Performance. “A good first step is to learn just how extensively consumer IT has embedded itself into your workforce. Consider how to manage the risks and opportunities, and experiment with ways to channel employees’ enthusiasm for consumer technology.”
Pat Henderson, the outspoken president of Hardstone Construction, defied industry tradition to apply 3D techniques pioneered in discrete manufacturing to the challenges of a commercial project. In the process, he proved that cost overruns are not a necessary evil of construction ... and that some risks are well worth taking.
Before he founded Hardstone Construction, a Las Vegas-based general contracting firm, Pat Henderson led $3 billion in projects at two of the largest U.S.-based construction companies. Despite 30 years of experience, however, certain aspects of the industry still puzzle him. For example, why does the industry accept 20% cost overruns as a normal part of doing business? And why do construction companies resist the 3D design technologies proven in countless other industries – technologies that could eliminate the overruns?
Getting answers to those questions is important to Henderson because he wants to leave his employees and his daughter, whom he is grooming to take over the company, a stronger, more profitable, and less frustrating industry than the one he has known. “I am convinced 3D has the power to eliminate the problems that abound in the construction industry,” the forthright Henderson said. “I believe it will reduce waste in construction by upwards of 10%. When you consider the trillions of dollars spent on construction in the U.S. alone, that is a very significant savings.”
A UNIQUE VISION
Henderson’s chance to test his theories finally arrived when Hardstone Construction was named general contractor for the multi-phased Tivoli Village mixed-use project in Las Vegas. With 2 million square feet of retail, office and parking space, the risks of delays and cost overruns were enormous – especially after the lead architects, structural engineer and mechanical/electrical/plumbing (MEP) engineer abandoned the project. The owner subsequently asked Hardstone Construction to pick up their duties, in addition to the company’s original construction coordination assignment.
The daunting challenge was also an opportunity. If Hardstone Construction could use the same advanced 3D technology that has transformed the discrete manufacturing industries to salvage the Tivoli Village project, Henderson knew he could prove his point beyond a doubt.
A CALCULATED GAMBLE PAYS OFF
Henderson believed that creating a highly accurate 3D model of a virtual Tivoli Village would allow his team to recognize and eliminate risks with low-cost bits and bytes rather than high-cost physical materials. By improving coordination, Henderson also bet that 3D modeling would make the workplace smarter, safer and more efficient by enabling all stakeholders to collaborate more effectively.
By the time the first phase of Tivoli Village opened in 2011, Henderson had his proof. Hardstone managed in-house coordination of all trades and brought the $300 million project in on time with zero dollars in contractor or subcontractor claims. Henderson estimates savings totaled between $500,000 and $1 million in potential framing cost overruns alone, and between $2 million and $3 million overall.
3D COORDINATION ENABLES LEAN CONSTRUCTION
Because construction industry experience with the advanced 3D application he chose is limited, Henderson relied on a diverse trio of 3D modeling experts on the Tivoli Village project: Patrick L’Heureux, an expert in aerospace technical construction who previously worked for Pratt & Whitney; Nicolas Cantin, a mechanical engineer who previously worked for Bombardier; and Becher Neme, an architect and urban designer who previously worked for renowned architect Frank Gehry. Working as senior project members with a small support staff, the three-man team modeled the entire architectural envelope, structure, and MEP systems in-house. The team also produced highly coordinated shop drawings for the construction teams directly from the linked 3D models.
A PRECISE MODEL POWERS PRECISE COORDINATION
One key advantage on the project was that repetitive 3D geometry did not need to be modeled manually.
Instead, variations on individual components were generated by entering parameters – drawn from spreadsheets and design tables supplied by the architect – into basic component templates. “It was a truly unique advantage,” Cantin said. “These tasks would have been time-consuming and subject to high risk of human error if they were modeled manually. In fact, without the automation process, most builders would not model them in the first place, which could lead to mistakes, rework and cost overruns.”
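The template-plus-parameters approach Cantin describes can be illustrated with a minimal sketch. The component fields, CSV layout, and part numbers below are hypothetical stand-ins, not Hardstone’s actual data; a real CAD template would carry full 3D geometry rather than a handful of dimensions:

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class FramingComponent:
    # Hypothetical component template; real templates drive parametric 3D geometry.
    part_id: str
    length_mm: float
    width_mm: float
    gauge: int

def generate_variants(design_table):
    """Instantiate one component per parameter row, instead of modeling each by hand."""
    return [
        FramingComponent(
            part_id=row["part_id"],
            length_mm=float(row["length_mm"]),
            width_mm=float(row["width_mm"]),
            gauge=int(row["gauge"]),
        )
        for row in csv.DictReader(design_table)
    ]

# A two-row design table, as the architect's spreadsheet might export it.
table = io.StringIO(
    "part_id,length_mm,width_mm,gauge\n"
    "F-001,2400,90,16\n"
    "F-002,3000,90,14\n"
)
variants = generate_variants(table)
print(len(variants), variants[0].length_mm)  # 2 2400.0
```

The point of the pattern is that adding a thousand more variants means adding a thousand rows to the table, not a thousand manual modeling sessions – which is why repetitive geometry that would otherwise go unmodeled becomes practical to model.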
Neme estimates that repeated iterations between initial design and final shop drawing production allowed Hardstone to optimize the MEP routing to reduce materials by 30%. “That is good for the budget, but also for the environment,” Neme said. “Because we order everything to fit based on the model, there is no waste.” On a framing budget of $5 million, change orders might easily add 20% or more – an additional $1 million. At Tivoli Village, the cost for change orders was zero. “We were obsessed with finding ways to apply manufacturing processes to construction,” Cantin said.
STREAMLINED WORKFLOWS END CONFLICTS
Because construction workers on the Tivoli project could see the models in 3D, they easily understood exactly how different systems came together, the order in which they needed to be installed, and the importance of doing their work in ways that left room for the next trade’s installations. “There was practically no time wasted on resolving conflicts between different trades on-site during construction,” L’Heureux said. “We simply didn’t have conflicts.”
With Phase 1 of the $300 million project complete, Henderson is so convinced that the application has the potential to transform the construction industry that he has arranged for his daughter to learn the program. “She is going to be in this business long after I am,” Henderson said. “I want her to have the best solutions at her fingertips. I am convinced it should be the future of this industry.”
The German Bundestag houses more than the country’s parliamentarians. It is also home to roughly 4,000 works of art by modern and contemporary artists. As part of an ambitious 3D project, the collection is set to be digitized and made accessible to the world via the Internet.
Georg Baselitz, Pablo Picasso, Ernst Ludwig Kirchner and Neo Rauch – these are just some of the illustrious artists whose pictures and sculptures adorn the walls, offices and corridors of the German Parliament buildings. For decades, 2% of the funding set aside for construction and maintenance of the parliamentary complex has been spent to buy art to decorate it.
For generations, members of Parliament (MPs) and official visitors have been able to marvel at leading artwork from a number of eras, but the rest of the world could not. That is about to change.
“We have some wonderful and in some cases very valuable pictures in the German Parliament buildings,” says Siegfried Kauder, Chairman of the Committee on Legal Affairs of the German Parliament, member of the Arts Council, and the man behind the project to put the entire art collection online in 3D. “Often, the works funded by the taxpayer are displayed in places that are inaccessible to the general public. I wanted to do something about this and, together with media arts expert Martin Zimmermann, came up with the idea of displaying them in 3D animated form on the Internet for everyone to see.”
GERMANY’S ART GOES GLOBAL
Decisions about which works of art the German Parliament buys are made by the Arts Council, a nine-person committee made up of representatives of all five political parties currently sitting in the Parliament. The works are distributed over several buildings, including the iconic Reichstag. They are hung on the walls of the MPs’ offices as well as in corridors and niches. They grace inner courtyards and hang loftily as floating installations. Reliefs by the artists Gerhard Richter and Sigmar Polke stand out from the walls, and every MP passes Joseph Beuys’s “Table with Accumulator, 1958/85” each day before they enter the chamber.
Making these treasures available to the general public via the Internet is a daunting challenge. Every picture, sculpture and installation must either be photographed from all sides or recorded using 3D scanners. These images are then converted into digital, 3D data using computer software that allows each work of art to be realistically visualized on a computer screen. Labels and annotations also will be digitized, presenting visitors with beautiful artwork along with information about the work and its creator.
INSIDE THE CHANCELLOR’S OFFICE
Soon, the Parliamentary website will display artworks that few people ever get to see, including those in German Chancellor Angela Merkel’s office. Merkel is well known for drawing inspiration from art, including a portrait of Konrad Adenauer, the first German Chancellor, and a picture of Catherine the Great that stands on her desk. A painting by Emil Nolde that Merkel faces from her chair depicts a giant wave and bears the title “The Breaker,” which some might consider symbolic of the Chancellor’s resolve. Arguably the most significant work of art in the Chancellor’s office is the monumental iron sculpture “Berlin” by Basque sculptor Eduardo Chillida. With its almost-touching arms, the sculpture, which is 5.5 meters (18 feet) tall and weighs 87.5 tons, invokes associations of rapprochement, division and unification, making it an appropriate political symbol.
Also of incalculable historic value is the Berlin Wall memorial, which has been moved to the Marie-Elisabeth-Lüders-Haus. In view of its tremendous significance for German history, the Arts Council decided that the wall memorial would launch the 3D project. It has been online since November 9, 2011. This is extremely gratifying for Kauder, who believes that 3D technology offers enormous possibilities not only for displaying art, but for actually communicating it. “3D can illuminate the entire surroundings of an object of artistic significance,” Kauder says. “Often, there are books, sets of pictures and lithographs that we have also bought but which would be hard to display due to their sensitivity to light. Thanks to modern 3D animations, some of the things that would previously have been impossible are now becoming possible.”
Before the 3D project, the government tried several approaches to displaying the parliamentary art collection. One is the “Art Room,” an exhibition directly on the banks of the Spree river that is open to everyone. The Berlin Wall memorial is located on the same promenade and is now accessible via the virtual project as well. Art and architecture tours through the generally accessible buildings are free and easily enjoyed by Berlin’s residents. Non-residents, art lovers and school groups, however, have only been able to enjoy these artistic treasures by journeying to the German capital.
Thanks to the 3D project, the collection will be easily accessible with a click of the mouse from a home or classroom computer. And while only a handful of people ever enter the Chancellor’s office in person, the same artwork that inspires Angela Merkel will now be available to the world.
Stroke survivors are routinely told that a large majority of their improvement from physical therapy will come in the first six weeks after their stroke, with almost no improvement after six months. Millions who have passed the six-month limit live without hope of improvement. But now those rules are being rewritten.
A joint research team from the University of Medicine and Dentistry of New Jersey (UMDNJ) and the New Jersey Institute of Technology (NJIT) is combining an immersive virtual reality (VR) environment with robots to shatter the limits of post-stroke rehabilitation. The study is funded by grants from the U.S. National Institutes of Health to Dr. Sergei Adamovich (NJIT) and Dr. Alma Merians (UMDNJ). “We’re helping patients make significant improvement even several years after a stroke,” explains Dr. Gerard Fluet, assistant professor at UMDNJ.
Previous research indicated that patients need to work at high-intensity levels to benefit most from therapy; the combination of a VR environment and robots is designed to help patients work longer and harder than they could on their own — three hours a day, four days a week for two intense weeks. The VR environment combats boredom by offering variety, while the robot combats fatigue by helping patients complete motions they otherwise could not. Together, they create a stronger sense of accomplishment, giving patients the will to persevere.
MORE MOVEMENT, MORE HEALING
In the VR world, patients might be asked to move a cup from a shelf and place it on a table, catch a ball or use a hammer. The therapist sets the difficulty of the task to optimize the challenge. If the patient completes a task successfully, the object they are working with shrinks or gets further away for the next try, increasing the difficulty; if the patient fails, the object grows or moves closer, making the activity a bit easier.
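The shrink-or-grow adjustment described above resembles a simple one-up/one-down staircase procedure. The sketch below is an illustration of that general logic under assumed level bounds, not the actual UMDNJ/NJIT software:

```python
def adjust_difficulty(level, succeeded, step=1, lo=1, hi=10):
    """One-up/one-down staircase: harder after a success, easier after a failure."""
    level = level + step if succeeded else level - step
    return max(lo, min(hi, level))  # clamp to the therapist-set range

# Three successes push the level up; one failure brings it back down a notch,
# keeping the patient hovering near the edge of their current ability.
level = 5
for outcome in (True, True, True, False):
    level = adjust_difficulty(level, outcome)
print(level)  # 7
```

Over many trials this kind of rule converges on a difficulty the patient can complete only part of the time, which is the regime where therapy is thought to be most effective.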
“It helps patients work at the top of their abilities,” explains Dr. Qinyin Qiu, a research engineer who helped program the VR environment. “As a computer scientist, I find it amazing to see virtual reality games play such a positive role in people’s lives.”
ADDING ROBOTS SPEEDS PROGRESS
Robots measure what part of the task a patient has completed, how much force the patient has exerted, and provide the additional force needed to complete the movement, if necessary. That helps patients translate even small movements—simply wiggling a finger, for example—into meaningful movements.
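An “assist-as-needed” controller of this kind can be reduced to a toy calculation: the robot supplies only the shortfall between the force a movement requires and the force the patient manages to exert. The force values and assist cap here are invented for the example:

```python
def assist_force(required_n, patient_n, max_assist_n=20.0):
    """Robot supplies only the shortfall between required and exerted force, capped."""
    shortfall = max(0.0, required_n - patient_n)
    return min(shortfall, max_assist_n)

# A small voluntary effort (2 N toward a 15 N task) is topped up by the robot,
# so even a wiggle of a finger still completes a meaningful movement.
print(assist_force(15.0, 2.0))   # 13.0
print(assist_force(15.0, 18.0))  # 0.0 -- no assist when the patient can do it alone
```

Because the assist scales down as the patient’s own contribution scales up, the patient is never carried passively through a movement they could have completed themselves.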
“Meaningful movement has a long-term impact on the way the brain heals,” Fluet says. “We’re not only working toward improving rehabilitation, but gaining insight into how the brain controls movement.”
Before-and-after brain imaging of the patients actually shows new neural connections being made in response to training. “It’s exciting to see those connections occur, especially over such a short period of time,” Fluet says.
“When therapists tell patients, ‘we don’t think you’re going to improve anymore,’ we feel we’ve failed them,” he continues. But the new research proves improvement is possible years after a stroke. “Now we can tell patients, ‘Keep trying. There’s still hope.’”
The power of social media in the consumer packaged goods industry
3 min read
David McCarty is IBM’s portfolio director for Consumer Products Industry Solutions. In this role, he is responsible for IBM’s Consumer Products strategy for developing and deploying solutions to solve industry-specific issues. McCarty has nearly 25 years of experience in consulting and implementing technology solutions in new product development, trade promotions, and supply chain management.
Compass: What opportunities do you believe social media offers to today’s consumer packaged goods (CPG) companies?
D. MCCARTY: A recent IBM study of more than 1,700 chief marketing officers (CMOs) reveals that most CMOs feel underprepared to manage the impact of key changes in the marketing arena. While 82% say they plan to increase their use of social media over the next three to five years, only 26% are currently tracking blogs to shape their marketing strategies. But even at this early stage of development, a leading industry analyst estimates, companies can cut their market intelligence spend by 20% to 40% and reduce traditional advertising costs by 40% to 60% using social media-based methods. The CPG industry is definitely in a transformational period with the rise of the “empowered consumer.”
Social, mobile and localization technologies are providing consumer products manufacturers with an unparalleled opportunity to create a relevant, direct connection with their consumers. Social technologies are an effective way to reach consumers and maintain contact for marketing and brand engagement, as well as for fostering 1:1 engagement.
We read a lot about CPG companies shifting marketing dollars from traditional advertising to digital. For example, P&G announced earlier this year that it plans to cut $1 billion from its marketing budget by 2016, in large part by leaning more heavily on lower-cost digital marketing.
But while we hear a lot about social media in relation to marketing, there doesn’t seem to really be quite the same level of discussion around new product development (NPD). There are some really good use cases out there, though. Social media technologies can help companies gather critical insights that can be used in product development, and consumer product companies definitely need to utilize social as an efficient input into NPD. Social media offers a way for companies to generate new ideas and innovations from the ground up. IBM’s 2012 CEO study found that a majority of CEOs were interested in building open and collaborative work environments.
What can innovative companies do in this arena?
D.M.: The NPD process certainly provides a very compelling opportunity to drive improvements. It’s a very interesting area, given the many challenges to be successful in the marketplace for new products. Social media can provide a means to better understand the needs of consumers, to quickly get a heads-up on untapped needs, to identify up-and-coming markets and category segments, and to test ideas in a rapid and cost-effective manner. Using social media in new product development can also help companies reduce time to market and the cost to develop new products.
How are companies using social media to develop more relevant products?
D.M.: Vitamin Water is a good example. Its flavor “Connect” was developed by the company’s Facebook fan base. More than 2 million Vitamin Water Facebook fans participated in the online contest, and one fan earned $5,000 for her role in the process. The competition asked fans to develop all aspects of the product, from selecting the flavor to designing the packaging and naming the product.
You mentioned using social media to help reduce the time to market and/or cost to develop new products. Do you know of any CPG companies that have tried it?
D.M.: Industry studies have stated that engaging directly with consumers on social platforms to observe what they say about products and features can cost as little as one-fifth as much as conventional research using focus groups or surveys. There are some great examples in the public domain of how companies can use input from social technology to test product ideas, generate ideas externally through crowdsourcing, and bring them to market faster. One good example comes from Kraft. The company formed an online community that included 150 opinion leaders in health and nutrition, along with 150 consumers struggling with weight loss. While observing online conversations, Kraft found women had trouble maintaining their diets throughout the day and wanted packaged foods that conformed to their diet’s requirements for meals and snacks around the clock. As a result, the South Beach line of products was developed in 16 months, a significantly shorter time than for traditional development.
What should companies interested in using social media do?
D.M.: CPG companies interested in social media shouldn’t be afraid to experiment or think outside the box. Conferences can provide insight on best practices. Over time, however, companies will need to develop an enterprise-level approach as social media becomes mainstream.
CPG companies turn to social media for new product introduction
4 min read
Social media started out as a way to chat and share with friends, but has since evolved into a method of putting consumers at the heart of innovation. For the Consumer Packaged Goods (CPG) industry, where launching new products is the norm, social media offers a unique view into the psyche of consumers and new ways for companies to interact and collaborate with their customers.
Consumer Packaged Goods (CPG) companies depend on a nearly continuous stream of product enhancements, reformulations, brand extensions and breakthrough innovation to drive brand loyalty and sales.
“A CPG company often generates one-third of annual revenues from products that have been on the market for one year or less, so setting product requirements and successfully launching new iterations are critically important,” states the McKinsey Global Institute report, “The Social Economy.”
According to Symphony IRI, more than 80% of new product introductions fail in the first year and only 3% reach sales of $50 million in that same period. So if CPG companies could get ahead of the curve and better anticipate what consumers want, or better yet, enlist their customers’ help in brainstorming and designing new products, perhaps the success rate of new introductions could be improved.
CHANGING THE CONVERSATION
Many CPG companies are finding that the answer lies in social media. Empowered consumers are changing – and taking control of – the buying decision process with a new voice. Conversations and word-of-mouth have overtaken “push” marketing in forming people’s opinions of what to buy. Research house Booz & Company says social media is quickly replacing broadcast media as the primary way people learn about products and services. In fact, 70% of consumers now say they look at product reviews before making a purchase and nearly 80% report using a smartphone to help with shopping.
Companies are joining the conversation with consumers to better understand, co-create and personalize new products and services to meet their needs. “Social networking and mobile applications are increasingly becoming part of our customers’ day-to-day lives globally, influencing how they think about shopping,” said Eduardo Castro-Wright, vice chairman at Wal-Mart in a statement about the company’s purchase of Kosmix, a social media start-up. Kosmix is now called @Walmart Labs and examines Twitter and Facebook posts and search terms on Walmart.com to help the big-box retailer measure interest in new or existing products.
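At its simplest, measuring interest in products from social posts is a counting problem: scan the text stream for tracked product names and tally the mentions. This sketch uses invented posts and product names and is only a conceptual illustration, not Walmart’s actual pipeline (which would add sentiment analysis, deduplication, and far more robust text matching):

```python
from collections import Counter

def interest_scores(posts, products):
    """Count how often each tracked product name is mentioned across posts."""
    scores = Counter()
    for post in posts:
        text = post.lower()
        for product in products:
            if product.lower() in text:
                scores[product] += 1
    return scores

# Hypothetical posts mentioning made-up products.
posts = [
    "Just tried the new AcmeCola, not bad!",
    "Anyone seen AcmeCola Zero in stores yet?",
    "Still loyal to FizzPop over here.",
]
print(interest_scores(posts, ["AcmeCola", "FizzPop"]))
# Counter({'AcmeCola': 2, 'FizzPop': 1})
```

Tracked over time, even a tally this crude shows whether buzz around a new or existing product is rising or fading.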
HARNESSING THE POWER OF THE CROWD
Many companies are beginning to use social platforms to harness the power of the “crowd” to help them conceive and design new products. Frito-Lay is one example. The company recently set the Guinness World Record on Facebook with 1.5 million new “likes” in 24 hours while using the social medium to promote its line of natural snacks. And now, using its own Lay’s “Do Us a Flavor” contest app, the company is soliciting input from consumers on new flavors for its chips. Frito-Lay will produce the three winning flavors, and the top new chip will earn its creator a $1 million prize.
Last year, Fiat did something similar with a new car design. The Italian carmaker invited customers and car enthusiasts to help develop the Fiat Mio online in real time. In the 18 months the project was underway, more than 2.5 million unique visitors participated and logged nearly 20,000 comments and ideas on the design, ranging from wheels that rotate 90 degrees for easier parallel parking to vehicle-to-vehicle communication that helps avoid collisions. It is one of the world’s first collaborative cars.
Early this year Samuel Adams, the Boston-based beer brewer, teamed up with social media enthusiast Guy Kawasaki for the first Samuel Adams Crowd Craft Project to create a crowd-sourced beer. Using an interactive Facebook application, the company used its fan base to weigh in on all aspects of a beer, from color and clarity to the flavor profile. The result, an American red ale called B’Austin Ale, debuted in March.
“I’m a total believer in crowd sourcing,” Kawasaki says. “It brings great minds together that might not collaborate otherwise.”
More than 80% of new product introductions fail in Year One. (SymphonyIRI Group, 2011 New Product Pacesetters Report)
What works for product development also applies to packaging. A few years ago, organic dairy products maker Stonyfield Farms asked its virtual community to help redesign its packaging. Nearly 40,000 of the brand’s fans offered input on shapes, colors and even campaigned to keep Stonyfield’s mascot, Gurt the cow, on the packaging.
According to chiefmarketer.com, pen maker Uniball has jumped in with both feet, migrating all of its spending on broadcast media to a social and digital program, supported by an overarching national retail promotion. When the company felt it could no longer effectively reach its primary target audience – males 15 to 34 – through broadcast media, it produced three slapstick humor videos for Facebook and doubled its fan base to 23,000 within the first few days of the campaign.
But do such ventures pay off? Rusty Snow, vice president and general manager at Uniball, says they do. “We feel we get much more targeted promotion and better benefit for our spend that is spread out for a longer period of time in more specific places for our people to see it,” he says.
The McKinsey Global Institute thinks so, too. The three largest potential sources of value from social media – marketing, product development and enterprise collaboration – could generate as much as $300 billion in potential sales over the next ten years in the CPG sector, the group estimates. That equates to potential productivity increases of between 0.6% and 0.9% per year.
According to David McCarty, Consumer Products Industry Solutions Portfolio Director for IBM, 3D promises to change the playing field dramatically, inside the enterprise and out. “Online virtual environments in which employees, suppliers and consumers can work together to turn new ideas into reality are an exciting new aspect of social media,” he says. “Product design and simulation can now be done via the cloud even with handheld devices such as smartphones. Such widespread access to virtual 3D environments means designers and engineers can work on a product and share ideas with others from anywhere and everywhere – and that brings the potential impact of ‘social’ to an entirely new level.”
“The possibilities are endless, but new product development is becoming one of the hottest areas in which CPG companies are exploring the potential impacts social media might have,” McCarty says. “Companies that are breaking ground in this area are likely to be at a competitive advantage for some time until their counterparts catch on and catch up.”
With more than 18 years of experience in high-performance computing (HPC) and scientific software development, Hakizumwami Birali Runesha, director of Research Computing in the Office of the Vice President for Research and National Laboratories at the University of Chicago, is passionate about applying HPC to science. Trained as a civil-structural engineer, he honed his keen interest in applying simulation technology and HPC to life science challenges in his previous post as director of Scientific Computing and Applications at the University of Minnesota Supercomputing Institute.
Compass: How are simulation and high performance computing impacting design and product development in the life sciences industry today?
H.B. RUNESHA: Over the past five years or so, I’ve been interested in the role high-performance computing (HPC) could play in the life sciences, in particular for the design of medical devices. Minnesota has a very large concentration of companies that produce medical technology products, and some of my colleagues there and I had a vision to enable simulation-based engineering for the design and optimization of medical devices. With advances in computer hardware, magnetic resonance imaging (MRI) and other imaging technologies, it is becoming easier to do 3D reconstructions, paving the way for patient-specific approaches that improve the design of the devices. Using simulation, you can go through thousands and thousands of parametric studies and then refine them before you ever build a prototype. With my background in civil-structural engineering, I was drawn in by what aerospace and automotive companies have accomplished using simulation for the design of airplanes and automobiles. The same principles can be applied to the life sciences. In the years to come, you’re going to see simulation and HPC play a big role in product development for the design and optimization of medical devices.
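The parametric studies Runesha describes can be sketched as a simple design-space sweep: vary a handful of design parameters, run a simulation for each combination, and keep only the best candidates for prototyping. The parameter names and scoring formula below are illustrative assumptions, not a real device model; an actual study would call a validated solver where the stand-in function appears.

```python
import itertools

def simulate(diameter_mm, wall_thickness_mm, material_stiffness):
    """Stand-in for a real simulation run; returns a lower-is-better stress score.
    A real study would invoke an FEA/CFD solver here (illustrative formula only)."""
    return (diameter_mm / wall_thickness_mm) / material_stiffness

# Candidate values for each design parameter (hypothetical units and ranges).
diameters = [2.0, 2.5, 3.0]      # mm
walls = [0.1, 0.2, 0.3]          # mm
stiffnesses = [50, 75, 100]      # arbitrary stiffness units

# Sweep every combination of parameters, as in a parametric study.
results = [
    ((d, w, s), simulate(d, w, s))
    for d, w, s in itertools.product(diameters, walls, stiffnesses)
]

# Refine: keep only the five best-scoring designs to carry toward a prototype.
best = sorted(results, key=lambda r: r[1])[:5]
for params, score in best:
    print(params, round(score, 3))
```

Even this toy sweep evaluates 27 designs in milliseconds; at realistic model fidelity the same pattern scales to the "thousands and thousands" of runs Runesha mentions, which is where HPC clusters come in.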
I believe simulation will play a role in all aspects of the life science product design.
What role will simulation play in efforts such as the U.S. Food & Drug Administration’s (FDA’s) Innovation Pathway initiative to get higher-quality, safer products into patients’ hands faster?
H.B.R: It takes a very long time for a product to finish the whole approval process. One would argue that if we can reduce that time, we will bring products to market much faster; otherwise we really lose our competitive edge to other countries. With advances in algorithm development and computer hardware, you can perform high-resolution simulations that provide you with good results. There is still work to be done to validate many models; however, with some of the tools that have already been validated, we can start thinking about how we can improve the regulatory process by involving simulation rather than relying solely on experimental approaches. Computation is now a third pillar of science, next to experiment and theory, and is starting to really establish itself as a reliable avenue.
If companies can agree on a basic framework that is compliant with what the FDA is looking for, we can start cutting down some of those steps. The big roadblock is cultural change. How do you get these companies to start using HPC? But the potential is huge.
What connection do you see between simulation and innovation?
H.B.R: When you’re given a tool where you can make mistakes … brainstorm and try out … you become more inventive. If you’re an engineer, your boss can’t give you a million dollars to try an experiment every time you have a new idea. But in your simulation software, you can test whether your idea makes sense without spending all that money. That’s the whole core of innovation, answering the ‘What if’ questions. Can you try things? Can you test things? Can you afford to be bold with your hypotheses? By trying, you discover more questions ... you can experience more. All of that is really critical.
Discover more about Dassault Systèmes’ solutions for Life Sciences
Main image: Hakizumwami Birali Runesha, director of Research Computing in the Office of the Vice President for Research and National Laboratories at the University of Chicago
Ensuring medical device safety while accelerating innovation
5 min read
HIGH-STAKES BALANCING ACT
With today’s aging global population and rapid advances in technology, consumer expectations for better healthcare are high. Governments around the world are under intense pressure to deliver on citizens’ demands. The Innovation Pathway is the US Food & Drug Administration’s strategy for delivering timely access to new technologies without compromising patient safety.
The United States Food and Drug Administration’s (FDA) Center for Devices and Radiological Health (CDRH) supports the notion that regulatory compliance and innovation must become complementary rather than conflicting processes. To achieve this goal, the center launched an initiative in 2011 to “encourage innovation, streamline regulatory and scientific device evaluation and expedite the delivery of novel, important, safe and effective innovative medical devices to patients.”
This Innovation Initiative proposes actions CDRH could take to help accelerate the introduction and reduce the cost of development and regulatory evaluation of innovative new medical devices. Chief among them, according to Megan Moynahan, program director at CDRH, is establishment of the Innovation Pathway, a priority review program for medical devices.
Moynahan is the first to point out that the FDA has been criticized by those who said its regulatory process was preventing innovation in medical devices. The FDA’s processes, she says, were blamed for taking too long, increasing costs, and driving innovators out of the U.S. to conduct clinical trials and launch new products.
“We started to embark on a way to streamline the regulatory process,” Moynahan says. The Innovation Pathway is designed to shorten the overall time and cost of development, assessment and review of medical devices and to improve the way FDA staff and innovators work together. “By engaging with innovators much earlier and more collaboratively, we believe we can reduce the time and cost of the entire process for bringing safe and effective technologies to patients more quickly,” she says. “We believe that if we can work with companies earlier in development, we can work to reduce some of the regulatory hurdles that might occur down the road … kind of get out in front of them before they happen.”
The Innovation Pathway was launched in February 2011 with just one applicant as a test case – a Johns Hopkins Applied Physics Lab neuro-prosthetic arm project. “We wanted to engage with them differently, to work more collaboratively,” Moynahan says. “We had some modest successes, but it was a very complex, very rough test case.”
In the summer of 2011, the White House Office of Science and Technology Policy encouraged government departments to pilot a new program called “Entrepreneurs in Residence” that would bring outside experts into government to tackle challenging problems. The idea was quickly adopted by FDA, and served to breathe new life into the Innovation Pathway initiative. “Entrepreneurs in Residence gave us a chance to bring outside people in to shape our thinking,” Moynahan says. “We gave them a goal to take our little nascent Innovation Pathway program and bring it to the next level – Innovation Pathway 2.0.”
Among the outside thinkers was Dr. Thomas Fogarty, often called the “Edison of Medicine.” Fogarty founded the Fogarty Institute for Innovation to foster the development of new medical technologies and give entrepreneurial innovators the tools they need to bring new medical therapies to market. He is an internationally recognized cardiovascular surgeon, inventor and entrepreneur who holds 135 surgical patents, including the Fogarty balloon catheter, considered the industry standard.
“Although I’d always been a staunch critic of the FDA, they reached out to me and asked me to become a consultant to the agency,” Fogarty recalls. “To me it was huge. It indicated a willingness to change, which was very, very positive. It’s been a great experience.”
But change doesn’t always come easily. “The big challenge is to be persistent – to understand why change is needed,” he says. “I think over the years many at the FDA haven’t really appreciated the urgency in speeding the process. But ‘death by delay’ is the worst thing a physician or patient can experience.”
Fogarty believes the Innovation Pathway is a strong sign that attitudes at the FDA are changing. “The FDA, at all levels, is beginning to understand,” he says. “They’re reaching out. They’re listening. The Entrepreneurs in Residence program has been a huge help in opening their eyes to what’s going on out there on the other side of the equation.”
Innovation Pathway 2.0 was launched in April 2012. “We’ve built in a collaboration period, which is very unstructured,” Moynahan explains. “It is a time when the company and the FDA work to create a shared vision of the success of a product – everything from what it’s going to look like technologically to how the pace will be managed, and even interactions with some of our sister organizations to establish things like reimbursement plans. The goal is to create a roadmap for what the company will be doing going forward.”
Moynahan says the team “concepted” the idea, but needed to test it. “We didn’t just want to do our typical thing where we stand some new program up in concrete and put out a bunch of Federal Register notices. We wanted to do something more creative, so we ran the End Stage Renal Disease Innovation Challenge in January 2012 with multiple purposes. We wanted to shine a spotlight on a patient population that is highly dependent on medical devices, but for whom there’s been little improvement in treatment technologies over the past several decades. So we offered innovators in this area a chance to be put on the Innovation Pathway.”
Three groups, selected from 32 applicants, are moving through the newly built collaboration period, which Moynahan says is still a work in progress. “The pathway is being built on the fly as the three companies move through the process and provide feedback. It’s a completely unheard-of method within a government agency. We’re trying to model different ways of doing what we normally do. We’re taking feedback from each company and improving the process for the next one. It isn’t about checking boxes and following a process; it’s about figuring out what the moment requires and delivering on that promise in the next round. Each company strengthens it for subsequent ones.”
BREAKING NEW GROUND
Moynahan says that while there is a great deal of enthusiasm for the idea that the FDA would even try to streamline its processes, the other goal is to transform the experience of working with the agency. “People are looking for more engagement and collaboration. The Innovation Pathway is an appealing concept, particularly for innovators and small companies who’ve maybe never worked with the FDA before. The goal is to create a better experience on both sides – for the companies and for our staff.”
Moynahan says the FDA team is excited and the program is getting significant internal support from multiple layers of the agency’s management. “Our front line and middle management is oftentimes a hard group to win over,” Moynahan says. “But they’re not just bought into this; they’re champions for it. They’re very excited about having a sense of ownership in what happens next. We’re taking the constructive criticism of our current participants and working to make it even better.”
The ultimate impact of the Innovation Pathway remains to be seen, but Fogarty believes a departure from the sometimes combative relationships between developers and the FDA that existed in the past is a positive step forward. “We have the same objectives,” he says. “We want to deliver better care to patients with safety and efficacy. We have to work together, and the Innovation Pathway is taking us in the right direction.”
Discover more about the Dassault Systèmes' solutions for Life Sciences
Leveraging modularity in the industrial equipment industry
4 min read
Modularity allows manufacturers to deliver a highly diverse product line while avoiding the complexity of engineered-to-order processes. For more than 15 years, Modular Management has helped industrial equipment (IE) companies apply modularity to their product development. With interest in modularity on the rise, Compass spoke with Alex von Yxkull, president and CEO of Modular Management, and Johan Källgren, partner, on the challenges of modularization.
Compass: What are the major trends for companies in the IE industry?
A. VON YXKULL: Over the years, demand for diversity has become greater and greater. Everyone wants to have their unique feature or unique design, and this has created a lot of complexity for companies. The trend we are seeing is that IE companies put too many engineering hours into developing each individual product. Providing customers with unique products reduces their margin, increases costs, and eventually consumes their profits. The complexity created by this engineered-to-order type of approach is costly not only for engineering, but for purchasing, quality management, and other peripheral activities. It negatively affects a company’s potential.
J. KÄLLGREN: Another challenge for IE companies is to get into their client’s process as early as possible so that they can suggest design improvements or convince the client to use their type of equipment. In this way, lead times go down for everyone, and customers obtain a more cost-effective price than if everything is specifically tailored for them.
Since these modules are predefined, IE companies can bring this to the table early during their customer’s specification process and spend more time on innovation and quality improvements. From a competitive point of view, this is a huge step. It is a game changer that propels IE companies into another league.
By developing a modular architecture, MTS Systems Corporation reduced overall part numbers by 90%.
Which companies are not good candidates for modularity?
J.K: Manufacturing companies across the whole scale from one-off to mass production can benefit from modularity. Examples range from ABB Power Systems, which builds one-off power plants, to Whirlpool, which manufactures 100,000-plus units of a model. However, modularity can greatly benefit companies where there is a level of complexity or a higher rate of innovation in the product assortment. Many IE companies fall into this category, which makes them ideal candidates for modularization.
We see many examples of companies that have made spectacular improvements thanks to modularity. One example at MTS Systems Corporation, producers of mechanical test equipment for the automotive, aerospace, construction and biomedical industries, involves the design of their servo hydraulic load frames. By developing a modular architecture for the frames, they reduced overall part numbers by 90%, from 11,000 unique part numbers required to build 150 unique product variants to only 800 unique part numbers required to build more than 100,000 unique product variants. It’s a marvel of design efficiency.
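The leverage in the MTS example comes from combinatorics: a modest number of module variants multiplies into a very large number of buildable products. The module names and counts below are hypothetical, chosen only to show how a few dozen module variants can cover five figures of product variants.

```python
import math

# Hypothetical modular architecture: each module slot offers a few variants.
module_variants = {
    "frame": 6,
    "actuator": 8,
    "controller": 5,
    "grips": 10,
    "software": 9,
}

# Every combination of one variant per module slot is a buildable product.
product_variants = math.prod(module_variants.values())
total_modules = sum(module_variants.values())

print(total_modules)     # 38 module variants to engineer and stock
print(product_variants)  # 6*8*5*10*9 = 21,600 distinct products they can build
```

This multiplicative effect is why a modular part library of 800 part numbers can support far more product variants than an engineered-to-order library of 11,000: the engineering effort scales with the sum of module variants while the product offering scales with their product.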
Why all the recent talk about modularity? Why has this not been more prevalent in the past?
A.v.Y: Even though some companies, like Volkswagen, embraced modularity more than 20 years ago, implementing modular architectures has only recently become a more accepted strategy. People are more confident with this approach because they see that it brings results. Why has this not been more prevalent in the past? Because “modularization” could be a long and difficult journey if not done in the right way. Companies must have a long-term view of their business and make big investments. They need to engage all their key cross-functional players to reap the full benefits of modularity. It takes managerial courage and a bold management team to make this decision in year zero and to have to wait until year two or three to see any benefits.
What should a company that is thinking of going modular take into consideration before making a decision? What are the key success factors?
A.v.Y: To begin with, companies must have a very clear vision of where they want to go and how modularity can help with that. Leadership needs to understand the cross-functional monetary value in the supply chain, in R&D and in sales, and top management must build a real strategy around modularity. The decision to go modular has to be anchored at the very top of the corporate ladder. This is not something that should be left in the hands of R&D. R&D can design it, but they cannot define it themselves.
The second success factor is to understand your customer. If you start from a technical point of view – in R&D for example – you cannot reach the market with the right offering. If companies do not understand how to build this configurable architecture in the right way, it can lead them down the wrong path.
What showstoppers do OEMs need to address when speaking to IE prospects about going modular?
J.K: The shift to modularization can be difficult. Top management needs to understand the value of going modular. They have to dedicate resources within their organization to execute a program like this. Without this resource commitment, it would be inadvisable to continue.
What are the foundations for a successful partnership between a customer and a modular solution consultant?
J.K: The consulting company is a facilitator that helps the client make the transition to modularization. There has to be mutual trust. Another aspect is executive sponsorship: having a voice that keeps top management informed, addressing questions on a daily basis, showing the results of modularization as things progress, and laying the foundation for subsequent phases. Having executive sponsorship helps keep the momentum to continue this journey to successful completion. So I guess we can sum it up in three words: commitment, sponsorship and trust. These are the keys to a sustainable success: companies that have established modular architectures typically enjoy much higher profitability than their competition for many years.
Touch screen makers poised to make sci-fi come to life
9 min read
The digital world is poised to become more immersive than ever. From super touch screens that recognize multiple users and objects, to those that can shape-shift to create physical buttons and give users the sensation of textures, a number of innovators around the world are pushing the boundaries of touch-screen development.
In January 2012, next-generation haptic technology developer Senseg unveiled the first production-ready product that turns touch screens into “feel screens.” Using electrically generated force fields, Senseg’s patented Tixel technology mimics the feel of physical textures, edges and contours on touch screens.
A “feel screen” gives a user the texture of cotton or silk as they browse clothes online, basalt or obsidian rocks as they research volcanoes, or leather versus velvet when purchasing a sofa on the Web. Feel can also guide people in how they use a device, allowing them either to minimize the visual focus required for accurate operation, or to enrich a multi-modal experience that incorporates graphics, sound and touch.
Senseg is just one of many innovators changing users’ experiences with computer and video screens. At SID Display Week 2012, Touch Revolution and Tactus Technology showcased a prototype Android tablet with a physical keyboard that rises from a flat touch screen. Using innovative microfluidic technology, a patented Tactile Layer component provides a next-generation haptic user interface with real physical buttons, guidelines, or shapes that rise out of the surface of any touch screen. Users can feel, press, and interact with these physical buttons just as they would use keys on a keyboard; when they are no longer needed, the buttons recede into the surface and become invisible. Because the Tactile Layer replaces another layer in the display stack, it adds no thickness to a standard touch screen display. For many, these innovations may seem impossibly futuristic, but they are all real and will soon be hitting the mainstream.
PREDICTING THE FUTURE
For decades, the storytellers of Hollywood have attempted to predict how our world may look in the future – giving us visions of intelligent robots, talking computers, 3D holographs, flying cars and more. It is surprising how often they get it right. In particular, visions of how we may interact with digital content in the future have proven particularly accurate. In the original Disney sci-fi film Tron, produced in 1982, for example, the head of ENCOM communicates with the evil computer Master Control Program using a large tabletop computer touch screen, remarkably similar to the modern-day Samsung SUR40 with Microsoft PixelSense technology.
Twenty years later, in 2002, Minority Report captured the imagination of a new generation as Tom Cruise’s character pulled on a set of black gloves to deftly manipulate components, graphics and details on a wrap-around bank of transparent screens. While the 2054 depicted in the movie is still very much in the future, today’s high-tech industry is already delivering Minority Report-style technology. Millions of people worldwide, for example, are using Microsoft Kinect to play games on their Xbox 360s without a controller. Combining an RGB camera, depth sensor, multi-array microphone and custom processor running proprietary software, Kinect tracks full-body movements in 3D, recognizes facial expressions, and understands voice commands to create a new level of gaming experience. Although a product that combines both gesture and touch is not available yet on the open market, the industry does have the capabilities to create it.
The global market for touch screen modules in mobile devices will reach 1.3 billion units by 2018. Global Industry Analysts
As Richard Ebner, CEO of Austria-based touch screen developer isiQiri, explains, Hollywood’s science fiction ideas only become reality when there is a demand for their existence. “Today’s form factors are certainly differing from what we saw in Minority Report,” Ebner says. “There is no need for special gloves to handle the content, and the technology needs to be much more integrated into what we already have around us, like walls or furniture, because this is where the use cases are.”
While touch-screen development has accelerated in the past few years, the idea has been around for nearly five decades. Many believe that the first-ever touch screen was invented by E.A. Johnson, an employee of the Royal Radar Establishment in Malvern, UK, who described his ideas for a capacitive touch screen in 1965.
With diagrams and photographs of a prototype, Johnson explained not only how the technology worked but also how air traffic controllers could use it by interacting directly with blips on their screens. Although this very early design was rudimentary and could only recognize one touch at a time, similar technology has made its way into the modern-day iPhone.
THE IPHONE GENERATION
Since the advent of Apple’s iPhone, the touch screen market has accelerated at an astonishing pace, spawning a new generation of start-ups keen to make their marks on the industry. Most leave marketing to the device manufacturers they supply, so chances are good that you don’t know the innovative companies’ names. Their products, however, are quickly infiltrating our workplaces and homes.
Touch Revolution, which was founded by industry pioneers in the touch-device market, including ex-Apple employee Mark Hamblin, is focused on delivering the iPhone experience on an even larger scale. “During my time at Apple, I was involved in a number of exciting projects, most recently as a senior product design engineer working specifically on the iPhone touch screen,” Hamblin explains. “After developing the iPhone and seeing how it was accepted in the market and how it transformed the smartphone industry, I knew immediately that this was something special and I wanted to concentrate on taking the touch-enabled experience even further.”
Before the iPhone was launched in 2007, touch screens were used almost exclusively by companies that could afford experimental trials of the technology. Despite gaining some early traction in the mobile phone space, the screens lacked responsiveness, and most devices could sense only one point of contact at a time. Today, high-quality multi-touch is expected as standard in most mobile devices, and the technology’s application is set for exponential growth.
A recent Global Industry Analysts (GIA) study, “Touch Screens in Mobile Devices: A Global Strategic Business Report,” projects that the global market for touch-screen modules in mobile devices will reach 1.3 billion units by 2018, compared with estimated sales of 184.3 million units in 2009. Of that 1.3 billion, projected capacitive (p-cap) touch screens, the type used in the iPhone, represent the largest market for mobile devices, a share that is only expected to grow.
P-cap screens allow users to easily navigate and manipulate digital objects with their fingertips, without having to press hard or use a stylus. This capability has quickly made p-cap screens the technology of choice for most touch-screen devices, including smartphones, tablets, e-readers, GPS units, TVs and more. Although they cost a little more than other options, such as resistive screens, many believe the enhanced user experience easily justifies the price premium.
“Since the iPhone, user expectations have changed dramatically,” Hamblin says. “Everyone now expects most screens to be touch-enabled and perform like the iPhone does. If you think about it, it’s the most natural way to interact with digital content. Give a phone to a young child and they’ll immediately start touching the screen.”
Hamblin believes that, based on the usability factor alone, touch screens will soon take over many interfaces in the home and workplace. “Take white goods, for example,” Hamblin says. “We have a number of projects developing touch-screen interfaces for the likes of washing machines and microwaves.”
Projected capacitive (p-cap) touch screens allow users to navigate apps with their fingertips, making them the technology of choice for mobile devices.
For an OEM, the possibilities of touch are significant. “Instead of providing a certain number of buttons to operate the washing machine, a touch screen interface provides the user with multiple options,” Hamblin says. “You can completely reconfigure the interface for the mode it is in to suit the person using it. A teenager may only want a simple on/off function. A more experienced user may want access to advanced wash settings. Touch interfaces can provide that flexibility. You can also future-proof the interface. If an OEM wants to add new features and functionality to a microwave, for example, they just have to update the software.”
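Hamblin’s point about reconfigurable interfaces can be sketched in a few lines: the on-screen controls become data selected by user mode, rather than fixed hardware buttons, so adding a feature is a software update. The appliance modes and button labels below are invented for illustration, not from any actual product.

```python
# Hypothetical touch-UI configuration for a washing machine: the control set
# is data keyed by user mode, so the same screen serves novices and experts.
UI_MODES = {
    "simple": ["Start", "Stop"],
    "advanced": ["Start", "Stop", "Temperature", "Spin speed", "Delay timer"],
}

def render_controls(mode):
    """Return the button labels the touch screen should draw for this mode.
    Unknown modes fall back to the simple control set."""
    return UI_MODES.get(mode, UI_MODES["simple"])

print(render_controls("simple"))    # ['Start', 'Stop']
print(render_controls("advanced"))
```

Future-proofing then amounts to shipping a new `UI_MODES` table: the hardware is unchanged, and a firmware update adds or rearranges controls.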
“Since the iPhone, everyone now expects most screens to be touch-enabled and perform like the iPhone does.”
Mark Hamblin Touch Revolution co-founder
INCREASING SCREEN SIZE
While Apple certainly deserves credit for kicking off the touch revolution, other leading technology vendors are having a profound influence as well. For example, Samsung’s range of Super AMOLED screens, with integrated touch capabilities and anti-glare technology, is popular in the smartphone market. Microsoft, despite heavy criticism for its sluggish response to the Apple iPad, is hoping to make a big impact in the tablet market with its Microsoft Surface line of touch-screen tablet PCs, due to hit the market this year.
In fact, Microsoft’s use of touch technology goes back a number of years. In May 2007, Microsoft became one of the first major technology companies to bring large-scale multi-touch computing to market in a commercially ready product. This tabletop offering has since evolved into the Samsung SUR40 with Microsoft PixelSense. The 360-degree, 10-cm (4-inch) thick product has a horizontal user interface that responds to touch, natural hand gestures, and real-world objects placed on the display, allowing users to interact with information and digital content in a simple and intuitive way.
MSNBC’s coverage of the 2008 presidential election featured a first generation of the Microsoft touch-screen table in action. MSNBC political director Chuck Todd used the screen during broadcasts to quickly and easily share information and analysis of the race leading up to the election. He analyzed polling and election results, viewed trends and demographic information, and explored county maps to determine voting patterns and predict outcomes – all with a flick of his finger. At the Rio iBar in Las Vegas, meanwhile, customers can create and order a signature cocktail by interacting with the touch screen at their table. They can also explore the surrounding area virtually, making friends and chatting with people seated at other units in the bar.
“Traditional computer interfaces are designed for individuals, but when people want to meet and work together face-to-face, computers can get in the way,” says Adam Bogue, president and founder of Circle Twelve, which recently was featured in the Gartner report “Cool Vendors in Imaging and Display Devices 2012.”
“Broad acceptance of multi-touch smartphones, and now tablets, sets the stage for growth in larger displays and tabletop computers for collaboration,” Bogue says. “As this happens, the feature of ‘multi-user’ – or knowing who is who – will be increasingly important.”
Ebner of isiQiri agrees that the future of touch screens lies in larger devices that can recognize multiple users interacting simultaneously on a single screen.
“This trend is particularly being driven by the ever-falling prices of large LCD panels,” Ebner says. “Today you can buy systems that can register 30 or more simultaneous touches, but the demand for this type of scale doesn’t really exist. I think that the sweet spot will be for around four to eight touches, allowing two to four users to interact with a device at any one time – be it to look at a photo album and zoom in on pictures on a touch screen coffee table, access information on a touch screen information kiosk, or for a family to place an order in a restaurant using a touch-screen menu.”
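The multi-user scenarios Bogue and Ebner describe hinge on telling one user’s touches apart from another’s. None of the vendors quoted here has published how they do it, but a minimal sketch of one plausible approach, grouping simultaneous touch points that fall within roughly a hand-span of each other, looks like this (the coordinates, threshold and function names are all illustrative assumptions):

```python
import math

# Illustrative sketch only: no vendor in this article has published such an
# algorithm. Assumes simultaneous touch points arrive as (x, y) coordinates
# in cm on a large tabletop; touches within roughly a hand-span of each
# other are attributed to the same user.

HAND_SPAN_CM = 25.0  # assumed threshold separating two users' hands

def group_touches_by_user(points, threshold=HAND_SPAN_CM):
    """Cluster touch points so each cluster approximates one user."""
    clusters = []
    for p in points:
        # Find every existing cluster this point is close to ...
        near = [c for c in clusters
                if any(math.hypot(p[0] - q[0], p[1] - q[1]) <= threshold
                       for q in c)]
        # ... then merge them all (a new point may bridge two clusters).
        merged = [p]
        for c in near:
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters

# Two pinch gestures at opposite ends of the table -> two "users"
touches = [(10, 10), (12, 11), (80, 40), (83, 42)]
print(len(group_touches_by_user(touches)))  # 2
```

A production system would also use timing, touch orientation or, as Bogue suggests, explicit user identification, but proximity grouping conveys the basic idea.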
“Broad acceptance of multi-touch smartphones, and now tablets, sets the stage for growth in larger displays and tabletop computers for collaboration.”
Adam Bogue, president and founder, Circle Twelve
New York City, for example, is conducting trials of smart, 81-cm (32-inch) touch screens, which have replaced approximately 250 payphones. The user-friendly screens look more attractive than the dated payphone and provide relevant information about local neighborhoods, listing nearby restaurants, stores, attractions, traffic updates and more. Built to be water- and dust-proof, the screens can be cleaned with a hose. If the pilot program goes well, all of the city’s 12,500 payphones could be replaced.
PUSHING THE BOUNDARIES
The increasing use of touch technology is creating opportunities for companies such as Senseg, Touch Revolution and Tactus Technology to further enhance the user experience. In this spirit, they are working to bring the sense of touch to touch screens.
Senseg is a leader in the field of adding tactile experiences to touch screens. “We can enhance the visual content on a display with touch feedback that creates the sensation of moving a vinyl record on a DJ music app, feeling sand when accessing images of the Gobi Desert, or feeling the corner of a page when reading an e-book on a tablet,” says Ville Mäkinen, founder and CTO of Senseg. “We have created highly efficient solutions that provide the precise tactile sensations right at the location of the user’s finger without shaking the whole device, and yet consume very little power on mobile devices.”
Disney Research has developed a similar concept with REVEL. This augmented reality tactile technology allows Disney to change the feeling of real objects by augmenting them with virtual tactile textures. The virtual textures come from a device worn by the user. The device injects a weak electrical signal anywhere on the user’s body to create an oscillating electrical field around the user’s fingers. When the user moves their fingers over a surface, they feel the sensation of distinctive tactile textures.
Disney Research also is experimenting with the power of Microsoft’s gesture-based innovation Kinect, taking it beyond gaming scenarios into everyday life. Called Touché, the company’s capacitive-sensing technology can detect a variety of touch gestures applied to everyday objects. Researchers say it could be used to create smart doorknobs that unlock when grasped in a certain way, or allow tables and chairs to sense the position of people using them. It could also let users control their phones by touching their fingers together or tapping their palms.
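Touché works by measuring how an object’s capacitance responds across a sweep of frequencies and matching the resulting profile against trained examples of each grasp. The sketch below illustrates only that matching step, using invented profile data and simple nearest-neighbor comparison in place of the trained machine-learning classifier the real system uses:

```python
import math

# Hypothetical data for illustration: each profile is the capacitive
# response at a handful of swept frequencies. Real Touché profiles have
# hundreds of frequency steps; these vectors and labels are invented.
TRAINED_PROFILES = {
    "one_finger": [0.9, 0.7, 0.4, 0.2],
    "full_grasp": [0.9, 0.8, 0.7, 0.6],
    "no_touch":   [0.1, 0.1, 0.1, 0.1],
}

def classify_gesture(profile):
    """Return the trained label whose profile is nearest (Euclidean)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINED_PROFILES,
               key=lambda label: distance(profile, TRAINED_PROFILES[label]))

print(classify_gesture([0.85, 0.75, 0.65, 0.55]))  # full_grasp
```

A smart doorknob built this way would simply map the returned label ("full_grasp", say) to an unlock action.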
Such functionality opens numerous compelling possibilities. Although researchers are still investigating its full potential, they have already highlighted applications in gaming, adaptive environments, smart offices, in-vehicle interaction and rehabilitation.
So when can we expect to see this type of functionality come to our computer screens, mobile devices and more? Senseg expects OEMs to integrate its technology into consumer products beginning in 2013. Tactus Technology’s Tactile Layer and Touch Revolution’s offerings have also been well received in the industry, with gaming controls and navigation devices among their customers’ top requests. Initial products using the technology are expected to be released by mid-2013.
Collaborating to address the challenges of the pharmaceutical industry
3 min read
Created in 2010, the BioIntelligence Consortium unites pharmaceutical companies, software companies, and public research bodies to accelerate and collaboratively revise drug discovery and development processes using digital technology. Ipsen, a global specialty pharmaceutical company, is a founding member of the Consortium. On behalf of Compass, Patrick Johnson, Science & Corporate Research Vice President at Dassault Systèmes, spoke with Christophe Thurieau, Ipsen’s Senior Vice President of Scientific Affairs and President of Ipsen Innovation, for an update on the program’s progress.
COMPASS: The global pharmaceutical industry is facing many challenges. Can you give us an overview?
C. THURIEAU: The pharmaceutical industry is undergoing major transformations due to the current “innovation crisis.” R&D productivity is plummeting, as is the discovery of new molecules. A new external economic environment, pressure from paying institutions and regulatory bodies, and the growing market share of generic manufacturers are all taking a serious toll on the short- and long-term performance of pharmaceutical firms. This trend has intensified sharply over the past two years, driving industry players to make strategic changes to their R&D activities. One example is a new model of collaboration, in which external partners take an increasingly important role all along the value chain.
How is Ipsen responding to these challenges?
C.T.: Over the past few years, Ipsen has been implementing a strategy to optimize its R&D, enabling it to grow its product portfolio in targeted therapeutic disciplines. Ipsen’s internal R&D efforts are also supported by the active pursuit of partnerships at every stage of the research cycle, from fundamental research to clinical development. Ipsen’s R&D staff, though top experts in their fields, represent just a small fraction of the expertise available globally in our specialty areas, making it imperative that we find synergies with other leaders at the cutting edge of medical and pharmaceutical R&D. The group has formed a number of major partnerships at the research stage. We have been working with the prestigious Salk Institute (in La Jolla, California) on fundamental research since 2008. We have signed partnerships with innovative biotech companies including Syntaxin, Dicerna, Oncodesign and Active Biotech, gaining access to new, promising technologies for the discovery of new drug candidates. In the field of biomarkers and in vitro diagnostics, we have a framework agreement with bioMerieux, and in medical oncology Ipsen has partnered with the Institut de Cancérologie Gustave Roussy. Last but not least, we are interested in new approaches and disciplines, such as the BioIntelligence Consortium, to speed up the R&D process.
You mentioned the BioIntelligence Consortium. Can you tell us your vision of this program and how Ipsen is involved?
C.T.: The BioIntelligence Consortium and program developed from a strategic meeting between Dassault Systèmes (3DS) and Ipsen. The use of virtual collaboration, modeling and simulation within the global PLM (Product Lifecycle Management) infrastructure developed by 3DS has enabled a deep transformation, proven in dozens of other sectors, in handling complex subjects and shortening research, development and production timelines. 3DS considered the application of PLM to the life sciences industry to be a strategic priority, leveraging the assets and values developed for other sectors. The vision for this innovative program is that the power of the digital world can help to fundamentally transform the industry’s current practices. It quickly became apparent during our discussions, however, that applying these systems to the life sciences could only be achieved by collaborating with the leaders in public health and the healthcare industry. The idea of the program was born and the consortium began to take form. Ipsen is working on two projects as part of this program: first in oncology, with the modeling and simulation of the complex biological phenomena known as “tumor migration and angiogenesis,” and second in immunology, with the modeling and prediction of the immunogenicity of therapeutic proteins.
From your standpoint, is the program making progress?
C.T.: The BioIntelligence program brings together experts from very different fields: biologists and other life scientists, bio-informaticians, and PLM specialists. These experts had to understand the diverse methodologies and constraints inherent in each discipline; achieving this intimate, trans-disciplinary understanding was one of the consortium’s first successes. Once this step was complete, prototypes with a defined scope of functionality were developed and then tested by end users. These tests offered a glimpse of the potential for this type of modeling and simulation tool within an R&D framework, and live access to digital experiments is helping to initiate a cultural shift toward virtual solutions within drug discovery practices. In addition, working closely with the teams of modelers and developers at 3DS and SoBioS, another consortium partner, has taught Ipsen’s teams a new way to tackle knowledge management and the design of scientific experiments. It has been a very rich cross-fertilization experience. The next two years of the program will focus on developing solutions with richer features that are more integrated and interconnected within the 3DEXPERIENCE Platform from 3DS.
With these achievements, what are Ipsen’s expectations for the future?
C.T.: Given the explosion of data generation in the life sciences, and the extension of the industry’s end-to-end supply chains to include biotechs, CROs and other partners, it is crucial that we employ solutions for global and social collaboration, intelligent information processing and analytics, experimentation and modeling, and simulation and calibration. These technologies are necessary to holistically tackle today’s and tomorrow’s drug discovery and development challenges. The solutions we are working on in the BioIntelligence project are well aligned with these needs for new pipeline innovation and industrial performance. Finally, these solutions should allow the health industries to rethink their value chain, which ultimately will benefit patients.