Can AR Counteract Industrial Automation’s Dark Side?

AR Insider: Scott Montgomerie – August 29, 2019

Automation will create problems in the enterprise — AR can solve them

There’s no question that automation and artificial intelligence will profoundly reshape how work will get done. They could be as transformational as the IT era was to enterprise business just a generation ago. But how many jobs will be erased in the process? When a company unleashes AI, does it help or hurt its workforce?

I predict that enterprise companies will always need humans. They’ll always need to optimize productivity, improve job satisfaction, close skills gaps and shrink downtime. And today, not years from now, augmented reality platforms can uniquely solve problems where automation and the workforce intersect. Augmented reality can help retain, and even create, jobs that automation can never fill. By providing contextually aware information in a convenient, consumable format, AR lets workers pair innately human strengths, such as critical thinking, with knowledge on demand, so they can train and gain new skills on the fly with little to no prior experience.

Fact or fiction: the robots are coming for your job

By now, the fearful reports are familiar: AI will swallow entire categories of careers, from factory jobs to truck driving to customer service and middle-manager roles. So will AI really kill 20 million manufacturing jobs in the next decade, as Oxford Economics predicts?

I see widespread workforce augmentation as a far more likely outcome than wholesale workforce automation. It’s quite possible that AI will create more jobs than it eliminates. And there’s evidence: according to a report commissioned by ZipRecruiter, in 2018 alone, AI created three times more jobs than it destroyed.

Amazon, no stranger to automation and robot-assisted warehouse facilities, recently made global headlines when it announced a $700 million investment to re-train a third of its workforce with technical skills like coding. It’s a massive initiative that underscores how deeply committed the company is to automation technologies, while also recognizing that it will need a highly skilled workforce to run them. Yes, there will be more robots at Amazon, not fewer. But the company is aiming to address skills gaps that will only widen in coming years. And it’s doing so by investing in 100,000 people, not just automation technologies.

This approach to automation is smart, for two reasons. For many employers, there are two key challenges to managing workforce costs: 1) training, ensuring your people have access to critical knowledge that is easy to find and consume in real-time and 2) retention of the workers you’ve already invested in. Replacing a highly-skilled worker can cost 400 percent of their annual salary, according to one estimate.

Your workforce is already changing

Transformational technologies like AI are advancing quickly, and more companies are finding ways to deploy them as they evolve. As businesses look to AI to reshape their workforce, it’s important to remember that the workforce is already changing, in very human ways.

At many U.S. companies, older employees are aging out of the workforce. According to the Wall Street Journal, the labor force is growing far more slowly than it did in decades prior. Overall productivity has also declined. And, older workers are staying in their jobs for years longer. Massive workforce re-skilling is an option, but it has a very real cost. (Just ask Amazon.)

However, losing a highly-skilled subject matter expert (SME) also carries a critical cost. Unilever, an enterprise customer of ours, told us that in the next five years, they’ll lose 330 years of experience to retirement in a single facility alone. When an SME retires, your business shouldn’t lose a career’s worth of institutional knowledge. Augmented reality gives businesses an easy way to capture that knowledge, transfer it to the new hires who need it to be successful, and retain it long-term to help build the next generation of a skilled workforce.

Augmentation vs. automation: why AR is the answer

In industries like manufacturing, uptime is everything. When something goes wrong, it can adversely impact processes down the line. Faults and failures need to be monitored and corrected as quickly as possible. Human error is the source of nearly a quarter of all unplanned downtime in manufacturing, and that downtime costs businesses trillions in losses each year. How can human error be minimized? AI is a long way off from identifying equipment failures and then automatically fixing them.

Augmented reality, at its core, is a new user interface: a way for humans to visualize and interact with data more intuitively than before. Humans evolved to interact with the world with their hands and their eyes; interacting with 2D data like words and spreadsheets is a crude abstraction that underutilizes one of the most powerful parts of the brain, the visual cortex. The visual cortex enables a person to consume, filter and process vast amounts of information about the real world, and using it as the interface to computing power is an enormous opportunity. In this way, we can augment humanity by merging the best of both worlds: the near-infinite, perfect memory and vast processing power of networked computers with the intelligent reasoning and extreme adaptability of the human mind and body.

Using this mix, we can leverage the strengths of each while compensating for the weaknesses of each. AI is far from generalized intelligence (although OpenAI is trying), and robots are far from perfect at actuating and interacting with the world. Augmenting humans with contextually relevant data and insights (potentially from IoT and AI systems) can be an extremely beneficial pairing.

In an enterprise context, AR can help workers reduce downtime and more accurately assemble, repair or maintain complex machinery. AR-assisted workers in manufacturing or field service can access contextual digital overlays and step-by-step instructions. They can review previously recorded support sessions, complete with AR annotations, to see how others solved a problem or completed a task on the exact same piece of equipment they’re working on. And workers can even initiate a live, AR-enabled video session with a remote expert who can see what they see and talk them through a task, dropping in pre-built AR instructions or drawing on the worker’s real-world view to help along the way. It’s expert knowledge, on demand, shareable across the enterprise and accessible exactly when it’s needed.

While some worry about the impending automation apocalypse as the ultimate job eliminator, AR can create opportunities to build a smarter workforce that will exist alongside automation tools like robots and AI. It’s an ideal platform for transferring and retaining expertise from experts to those learning new skills, regardless of physical location. It can also be used to bridge your company’s data and your employees in the real world, boosting productivity and minimizing costly downtime. And, when workers are more productive and better at their jobs, overall job satisfaction improves, and that’s a win for everyone involved.

Scott Montgomerie is CEO of Scope AR. Since co-founding the company in 2011, Montgomerie has been one of the first executives to get augmented reality (AR) tools into use at multi-billion-dollar corporations.

A version of this article previously appeared in VentureBeat, and has been repurposed here with the author’s permission.

Read original article at AR Insider

Scott Montgomerie on Startups, Y Combinator, and building spacecraft with AR

The AR show: podcast – August 20, 2019

Scott Montgomerie is the co-founder & CEO of Scope AR, the first augmented reality knowledge management company.

Earlier in his career, Scott started writing code professionally at 17, and the startup he worked for was acquired by Intuit. After several years there, he set off to become a serial entrepreneur.

Since founding Scope AR in 2011, Scott has been one of the first to get multi-billion-dollar companies to use augmented reality tools. He and his team have simplified the adoption and deployment of AR across a number of industries, and today Scope AR addresses challenges around problem resolution and guided work instruction for customers including Toyota, Lockheed Martin, Unilever, Prince Castle and others.

The Conversation

In this conversation, Scott shares how he juggled going to college while working at the startup that was acquired by Intuit. He talks about his journey as an entrepreneur and shares some epiphanies he had along the way.

He goes on to share his early explorations with AR, as well as his experience taking Scope through Y Combinator, a highly selective startup accelerator.

He also discusses the significant impact Scope’s products are having on his customers, including the return on investment, and where the company is headed in the future.

Listen to the podcast here

Lockheed Martin Embraces AR on the Shop Floor

EETimes: George Leopold – August 16, 2019

Augmented reality tools are being used to manufacture the next U.S. manned spacecraft.

Augmented reality is gradually moving to the factory floor as aerospace and other manufacturers embrace the technology to help train technicians. The goal is to reduce assembly errors and boost productivity while saving time and money.

Among those adopting AR is the world’s largest military contractor, Lockheed Martin Corp., which is working with software developer Scope AR to develop how-to manuals that include animations for assembling spacecraft components. The partners said the collaboration has cut the time required to interpret assembly instructions by 95 percent, reduced overall training time by 85 percent and boosted productivity by more than 40 percent.

Lockheed Martin first implemented AR technology in 2017 within its space division, which is currently building NASA’s Orion spacecraft.

Shelley Peterson, Lockheed Martin’s augmented technology project leader, said AR tools are being used to assemble various Orion components, including the skeletal framework of the spacecraft’s titanium heat shield that must withstand re-entry temperatures as high as 5,000°F.

San Francisco-based Scope AR’s tools also have been used for spacecraft components like cable assemblies and instrument panels, as well as the forward bay where the Orion crew seat module is situated. AR technology is used, for example, to develop the work instructions for drilling and torqueing steps, Peterson said.

Peterson also noted in an interview that technologies like Scope AR’s software and Microsoft’s HoloLens “mixed reality” tool have helped accelerate the interpretation and presentation of workflow data spanning assembly, manufacturing, test and maintenance steps. That translates into time savings and reductions in touch labor for the narrow tolerances required for fasteners, transducers, accelerometers and other spacecraft components.

In one example, Peterson said Lockheed Martin’s space unit has realized roughly $38 in savings per fastener. That’s for an aerospace manufacturer that buys more than 2 million fasteners a year.
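
For scale, a back-of-envelope calculation (an illustrative upper bound that assumes the per-fastener saving applied to every fastener purchased, which neither Peterson nor the article claims):

\[
\$38 \text{ per fastener} \times 2{,}000{,}000 \text{ fasteners per year} \approx \$76 \text{ million per year.}
\]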

The company said AR allows it to create workflows more rapidly than traditional methods, and Peterson noted that existing design data can be used to supplement AR-based work instructions. AR software also can be used to add part identifiers or color coding of parts. Assembly steps can then be animated.

Lockheed Martin is developing a reputation as an early adopter of disruptive technologies. Previously, it has invested in a quantum computing center focused on challenges such as using the added computational power to debug millions of lines of mission-critical code.

For its part, Scope AR has gradually developed industrial use cases for its software, starting with training assembly workers and eventually partnering with global manufacturers like Lockheed Martin, Boeing, Siemens and Toyota. It claims to be the first AR vendor to develop an “enterprise-class” AR video platform for Microsoft’s HoloLens.

CEO Scott Montgomerie said surgical application of AR technology works best, with the Lockheed Martin use case illustrating how a specific project like Orion can benefit from what Montgomerie calls “real-time knowledge transfer.”

That augmented knowledge includes step-by-step instructions, animations in the form of digital overlays and live support from remote experts. “You don’t want to add another layer of process,” Montgomerie explained in a recent blog post. “You want to ensure workers can access knowledge from subject matter experts or resources….”

Read original article at EE Times

How Lockheed Martin is Using Augmented Reality in Aerospace Manufacturing

engineering.com: Isaac Maw – August 13, 2019

Lockheed Martin is famous for engineering innovation, dating back to the legendary Skunk Works. Today, the defense contractor is making use of innovative augmented reality technology in its manufacturing processes and across entire product lifecycles. Lockheed Martin’s AR project began in the Space Systems division, for example in assembly and quality processes for NASA’s Orion spacecraft, but has been so successful that the company has deployed the Microsoft HoloLens hardware and Scope AR software in other divisions, namely Aeronautics, Missiles and Fire Control, and Rotary and Mission Systems. The company may even send HoloLens to space on crewed missions to support training and maintenance tasks.

To find out more about what Lockheed Martin is doing with AR, engineering.com spoke to Shelley Peterson, Principal Investigator for Augmented and Mixed Reality at Lockheed Martin Space Systems, and Scott Montgomerie, co-founder and CEO of Scope AR, which makes the AR software Lockheed Martin is using. Check out our conversation below.

Why did Lockheed Martin start using augmented reality?

SP: The way that our spacecraft are built involves drawings, models, and lots of data that has to be interpreted. For space, we’re often building a small number of spacecraft. We have programs that have higher volumes, but in many cases, we’re interpreting data in almost every build. And that interpretation takes time. When you can place data spatially, there’s just a significant advantage. It removes so much of the interpretation. We’ve seen in the past that it takes about 50% of the time to go through all of that data and work with peers to make sure the action that’s about to be taken is the correct one.

Of course, when building spacecraft, we need to be very cautious during the actions that we’re taking in the manufacturing process. With 50% of the time being spent just on interpreting the data, our leadership gave it a name—information overhead—and we set a goal to optimize it.

What sort of data are we talking about here? Assembly instructions?

SP: It spans the manufacturing process through to assembly, launch and test operations, as well as use cases in maintenance and sustainment. We felt that manufacturing was a great place to start because it’s an easy place to measure. We can calculate how much time is being saved, and the other benefits of the technology.

What type of personnel wears and uses the AR in their jobs?

SP: It’s the technician on the shop floor along with the quality engineers who support them. And then these technicians and engineers work with us to build the AR content. They provide us information and feedback on what content needs to be built, and in some cases they also build the content.

For example, position alignment. We have many objects that have to be placed within a 0.5” tolerance, such as fasteners, strain gauges, transducers, heaters, misters, accelerometers, or hook-and-loop fasteners (there’s a lot of Velcro and tape on spacecraft!). Just for fasteners, we’re seeing about a $38 savings in touch labor per fastener. To put that in context, Lockheed purchases over 2 million fasteners per year. And I think that capability applies to all of the other components I just mentioned.

We’ve also seen drilling and torque applications. We’re working on an activity right now that is the start-to-finish work instruction for the Orion crew seat module, building up the crew seat from start to finish just with AR. So, we started off with shop aids that supplement pre-existing resources, and we’re still doing that, but we’re also adding a work instruction that is a true start-to-finish assembly process.

There’s an old saying, “There’s no repair shop in space”. Quality is obviously important in this industry. What can you tell me about how AR is used in quality and inspection?

SP: When the quality department is looking to verify component placement in assembly, AR is a very quick and easy way to do that. And when they verify the work instruction in advance, it’s much quicker to do that through AR than with traditional methods. Many of the support organizations for the manufacturing process are benefiting as well.

I imagine that it doesn’t require much, if any, training for the operators to interpret the content. Is that right?

SP: That’s correct, yes. Because it’s spatially referenced in the environment, they’re often seeing exactly what they expect. It’s just in a much more easily accessible format.

Who creates the AR content? And do they require special training to be able to generate AR content?

SP: It’s normally within the engineering and technology organization, and we have manufacturing engineers within that organization who build the content. Scope AR has a training package online, and as soon as the engineer has their computer configured, we point them to the online training. It takes about a day to go through that training and then they’re set. The content actually uses the same models as the original CAD for the assembly.

How does the time to create AR content compare to the time it would take to create more traditional training materials or instructions?

SP: We can create content much more rapidly in AR, and we’re seeing that in many activities. For example, in some cases we’re placing objects such as standoffs, strain gauges and transducers across curved surfaces, and we have typically used Mylar templates. With AR, we’ve had scenarios where we can build the AR content before the Mylar could be ready. So, there’s the process to create the work instruction in the traditional method, but then there’s also the process to create additional resources like the Mylar for that traditional process, or to align a laser projector, which can take weeks. We typically build these set position alignment capabilities in less than a day, sometimes in a couple of hours.

SM: If you think about creating traditional work instruction, essentially you’re taking an idea, or a set of process steps, that was conceived in the brain of a mechanical engineer, and then going through a translation process to put it into words and images, and in Lockheed’s case special Mylar templates, just to convey the instruction. That takes a lot of time and energy.

SM: When those traditional instructions get to the technician, they’re doing a mental mapping: reading the text, deconstructing the images, and then doing a pile of calculations to understand what they need to do, and where. This type of instruction introduces a lot of inefficiency, from translating the conceptual idea into words and images to translating it back from words and images into the actual work. Augmented reality is just a far more intuitive way of taking what’s conceptually in the mechanical engineer’s head and delivering that knowledge to a frontline technician to carry out the instruction.

That’s interesting, because the instinct might be that AR would be useful for high-volume production, where you make one up-front investment to create content for the AR system and then use it many times. But for production volumes as low as the space industry’s, it’s essential that creating the content is a quick, easy process.

SP: Yes. We have to have that in order to have a sufficient return on investment, for it to make sense. And what we’ve seen so far is positive ROI on the first application. There are definitely additional advantages when we’re using it across high volumes, but we’re seeing value even in the initial build of a spacecraft.

Shelley, you mentioned the laser projection technique as an existing alternative. That makes me wonder, with what kind of precision can you have an AR instruction align on a physical surface?

SP: We’re seeing 0.1” accuracy. Many components on the spacecraft have a 0.5” tolerance requirement, so 0.1” accuracy is well within that range.

So, this isn’t just one lab trying out AR. It’s really a full deployment of the technology.

SP: Depending on how we phrase full deployment, yes. Full deployment is definitely within the roadmap.

Very exciting. So, what are the next steps? Where do you think Lockheed Martin could improve upon your use of AR technology?

SP: So, we’re looking at how the technology enables full digital threads: everything from early-stage concept design, through design and manufacturing, to assembly, launch and test operations, post-launch operations, maintenance and sustainment, even astronauts in space. We see benefits in having the data available across an entire life cycle, and not only for the flight data, but also for the test equipment and other aspects of our environments that support the manufacturing and operations of that resource.

We started with manufacturing because it’s the easiest to quantify and gives us a really good look at ROI, but it spans easily into those other areas: into design, into operations, and other elements. Anytime people use paper or computer resources to transfer ideas to something in the real world, there’s an opportunity for AR.

Scott, we understand how Lockheed Martin uses Scope AR. But what other manufacturing applications or industries can benefit from AR?

SM: Yeah, I think Shelley provided a really great overview of aerospace. We’re really excited about our AR applications in aerospace. At the complete other extreme, in really high volume, we’re actually pretty strong in consumer packaged goods (CPG) manufacturing. Scope AR has a number of different use cases on assembly lines. Where we’re seeing a lot of benefit there is in reducing downtime on the assembly line. Reducing errors in simple procedures can have a major impact on high-volume manufacturing lines.

SM: For example, in clean facilities where they make food products, operator turnover is high. Operators are fine doing their jobs day to day, but when something goes wrong and the line breaks down, there isn’t really somebody on the line who knows how to fix it. In the old world somebody would be able to fix it, but they’re not in the clean room the entire time; they’re somewhere else, and the gap between something going wrong and that person actually getting on site to start diagnosing and fixing it can lead to an extended period of downtime. Using our remote assistance application, the frontline technician simply calls up the expert, who is somewhere else, and diagnosis can begin immediately after an error; quite often the expert doesn’t even need to come on site. A simple procedure can guide that frontline technician through the repair and get the line back up and running much, much quicker. I can tell you that in some cases we’re seeing about a 50% reduction in downtime with those types of use cases.

SM: And then secondarily, AR work instructions can prevent errors as well. So, again, frontline personnel often see high turnover and training is an issue. Simple mistakes such as installing something improperly can cause significant downtime. By using AR work instructions to show these frontline workers how to perform these tasks, they’re able to reduce error rates and improve outcomes that way, as well.

SP: And we see error prevention, too. Often you don’t know when an error was prevented, but on a number of activities we’ve seen proof that AR prevented an error from occurring. In the space industry, on some programs an error can lead to a day of delay, and a day of delay can cost over $1 million.

Read the original article at engineering.com

AR keeps people in the equation and makes their jobs better

Strtup Boost: Jason Malki, August 12, 2019

“AR keeps people in the equation and makes their jobs better” a peek into the future with Scott Montgomerie CEO of Scope AR

I had the pleasure of interviewing Scott Montgomerie — co-founder and CEO, Scope AR.

Since co-founding Scope AR in 2011, CEO Scott Montgomerie has been one of the first executives to get augmented reality (AR) tools into use at multi-billion-dollar corporations. Through envisioning and developing some of the most transformative enterprise AR technology on the market, Scott and his team have simplified adoption and deployment of AR across numerous industries, addressing complex challenges experienced by Toyota, Lockheed Martin, Unilever and Prince Castle, among others.

Having launched many AR firsts, Scott has become one of AR’s top thought leaders and visionaries for the space. He has shared his knowledge and spoken about some of the most innovative uses of AR at several of the industry’s top conferences, including AWE (US and Europe), Unity Vision AR/VR Summit, Virtual Reality Strategy Conference and VRDC.

Prior to founding Scope AR, where he manages day-to-day operations and is responsible for product development and driving the company’s technology team, Scott was the VP of Engineering at Xfire, Inc., which developed an in-game communications platform for the e-sports community. Prior to that, Scott launched his first company, Zigtag Inc. (later renamed Semanti Corp.), which focused on building smart social search solutions to help consumers efficiently find, retrieve and share personally relevant information on the web. In addition to more than 15 years of consumer software experience, Scott is also a full-stack developer whose work has ranged from iPhone games to Ruby-on-Rails corporate tools.

Scott graduated from the University of Alberta with a Bachelor of Science degree in Computing Science and is a published researcher in the field of bioinformatics. He is a Canadian transplant who currently lives in San Francisco.

Thank you so much for joining us! Can you tell us a story about what brought you to this specific career path?

Coming out of high school, like most people, I didn’t really know what I wanted to do. I learned programming as a teenager as a hobby, but I considered most people who spent a lot of time with computers as “uncool,” so I never really took it seriously. I entered University studying biology at first. As luck would have it though, the summer after high school, a friend of mine had a neighbor who was building a startup company in the tax software space, and they were looking for someone who understood computers and French, as the job entailed essentially translating the software into French. By the end of the summer, I was programming tax calculations, and within six months I was basically in charge of the French product. A few years later, we were acquired while I was still in University, and I took over as lead developer on the product.

After I graduated, they moved me into an innovation group with an elite group of business people and programmers, and we were essentially tasked with evaluating business ideas, incubating them and then taking the winners to market, just as you would with a startup. Over two years, we evaluated dozens of ideas and launched two products to market, so it was an amazing education in terms of business building.

Can you share your story of Grit and Success? First can you tell us a story about the hard times that you faced when you first started your journey?

Like most startups, Scope AR, the pioneer of enterprise-class augmented reality (AR) solutions, has faced a few near-death experiences over the years. One in particular stands out. Back in early 2015, the founders had all quit their day jobs to go more or less full-time with Scope AR, and I moved to San Francisco to begin fundraising. Having no network there and never having fundraised before, I found this far more challenging than I had expected. I managed to get some introductions to some angel investors, and after a few months, attracted some interest.

But a low point came one particular week. I had managed to scrape together about $600K in financing, led by one angel in particular. The angel put us through the grinder in terms of due diligence; now that I know what I’m doing, I realize it was similar to the due diligence required for a Series B, unheard of for a small angel round. It was the week we were supposed to close, and the lead angel called and said, “I’m out. I don’t really like the patent landscape, there are just too many patents in the space.” I was furious. Wouldn’t this be one of the first things you’d look into in your due diligence? The rest of the week, I had to go to the rest of the participating investors and tell them the lead had backed out, and one by one they all dropped out as well. I had completely lost what would have been our seed round.

That Friday, after the final investor rejected me, I said, “this week just needs to end.” But as luck would have it, this trying week was the pivot point for the company. I was at a bar that Friday evening and had a suitcase with me that was my portable demo, and a guy sat down beside me and we started chatting. He eventually asked, “So what’s in the suitcase?”

Me: “Oh, it’s just my demo.”

Him: “Oh, what do you do?”

Me: “Augmented reality.”

Him: “No, way, me too! We just got acquired by Google.”

Me: “How?”

Him: “We went through this program called Y-Combinator, you should apply.”

So, I applied the next week, and we got in. By the time YC’s check arrived, we had $10K in the bank (our payroll at the time was about $40K/month!), and after YC we went on to raise our seed round of about $2M.

Where did you get the drive to continue even though things were so hard?

I guess I felt that our idea had legs. I’ve wanted to build a new startup ever since that first tax startup acquisition, and this was by far my best shot at it. When we showed off the first prototype of our vision, the customer response was so tremendously positive that it would have been a crime to not pursue it. And when things get hard, I think about our vision for the future, all the things we’ve accomplished and all that we’ve sacrificed, and how our early customers believed in us, and there isn’t much of a choice but to keep going.

I’ve always tried to make the world a better place, and when I think of all the positive things augmented reality can do for workers, that’s hugely motivational. It also doesn’t hurt that it’s just really cool technology. But really, I think the idea of using augmented reality as a new user interface that can bring together the vast memory capacity of the internet with real-time IoT data and AI to deliver contextually relevant information is incredibly powerful and has the potential to be fundamentally transformative for the better. And unlike AI, which promises to replace people, AR keeps people in the equation and makes their jobs better, which is incredibly motivating for me.

So, how are things going today? How did Grit lead to your eventual success?

The company is doing well. We’re growing revenue, signing up customers and just raised a successful Series A round in late March. The market took longer to mature than we had expected, but there are very encouraging signs that it’s about to take off in a big way.

Which tips would you recommend to your colleagues in your industry to help them to thrive and not “burn out”?

Initially, I believed that to be successful you needed to work seven days a week. I did that for about three years, but that’s absolutely not sustainable. At some point, you ask, “Why am I doing this? What do I hope to accomplish?” At first, I was willing to sacrifice everything for the company, including time with my wife and family, but then you start to realize that there have to be other things in life. I remember a particular weekend when we had a big problem; my sister was visiting and we had planned to go wine tasting in Napa Valley. I ended up pulling an all-nighter on Friday night, and my wife graciously took my sister to Napa without me while I continued to work.

That was a bit of an eye-opener for me that there must be a balance. You can definitely go hard for a while, but at some point, you’ll burn out and it’ll all be for nothing. Building a company is definitely a marathon, so you need to take care of yourself sustainably, which means maintaining a healthy personal life to complement an intense work life.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story?

I think the support of one of our earliest investors was pretty pivotal in our development. Through his introductions, I was able to grow my network of advisors, and through his support with tough decisions, we got to where we are today.

I also would not be anywhere close to where I am without the support of my wife. There have been many moments where I probably would have gone off the deep end without her constant selfless support.

How have you used your success to bring goodness to the world?

I like to help other entrepreneurs. Given a lot of the pain I’ve gone through, I wouldn’t wish that on anyone, so if there’s a way I can ease some of that pain and smooth the way, I’m happy to help mentor others in their journey. In particular, being a Canadian in San Francisco, I try to help fellow Canadian entrepreneurs with their businesses and pitches and make introductions to angels and VCs when I can.

Based on your experience, can you share 5 pieces of advice about how one can develop Grit?

Do hard things. When I was young, my parents would always say I found the hardest way to do something, but then mastered it. I guess that conditioned me to embrace failure. Doing hard things always challenges you to grow, and without growth, you can never achieve greatness.

Embrace failure. If you embrace failure, you’ll try more things, instead of being afraid of failure and never truly “succeeding”.

Take risks. No one ever got ahead without taking risks. Most people are happy to have a 9–5 job so they can have a nice comfortable life. This does not lead to developing grit.

Be vulnerable. This is one I’ve been working on quite a bit, and it’s certainly not natural for me. But I’ve learned that being able to admit failure (see embracing failure) is incredibly important, both for yourself and for your leadership and your ability to develop relationships. It’s the basis of authenticity and, to a degree, of trust.

Be resilient. I recently heard someone say that college basketball scouts look for two traits: a basic level of skill, and how well a candidate picks themselves up after a failure. If they get down on themselves and lose confidence, they’ll never achieve greatness, but if they understand that falling is simply another step towards success, they’re much more likely to succeed in the league.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I’m actually pretty passionate about the environment. We’ve only got one world to live on, and we need to take better care of it for our children’s sake. I’m certainly not a crazy hippy or anything, I don’t drive a Tesla (although I definitely would if they had a convertible), but I am conscious about simple things like material waste, recycling, composting, and just being efficient with everything. I despise services that ship you a ton of waste, and I’m a huge fan of companies like Zume, who are trying to fix the food supply chain by eliminating travel costs and packaging waste.

How can our readers follow you on social media?

They can follow me on Twitter at @smontgomerie. They can stay in tune with what Scope AR is doing by following us on Twitter, Facebook or LinkedIn.

This was very inspiring. Thank you so much for joining us!

Read the original article here

Automation will create problems in the enterprise — AR can solve them

VentureBeat: Scott Montgomerie – August 8, 2019

There’s no question that automation and artificial intelligence will profoundly reshape how work will get done. They could be as transformational as the IT era was to enterprise business just a generation ago. But how many jobs will be erased in the process? When a company unleashes AI, does it help or hurt its workforce?

I predict that enterprise companies will always need humans. They’ll always need to optimize productivity, improve job satisfaction, close skills gaps and shrink downtime. And today, not years from now, augmented reality platforms can uniquely solve problems where automation and the workforce intersect. Augmented reality can help retain, and even create, jobs that automation can never fill. By providing contextually aware information in a convenient, consumable format, AR lets workers pair innately human strengths, such as critical thinking, with knowledge on demand, so they can train and gain new skills on the fly with little to no prior experience.

Fact or fiction: the robots are coming for your job

By now, the fearful reports are familiar: AI will swallow entire categories of careers, from factory jobs to truck driving to customer service and middle-manager roles. So will AI really kill 20 million manufacturing jobs in the next decade, as Oxford Economics predicts?

I see widespread workforce augmentation as a far more likely outcome than wholesale workforce automation. It’s quite possible that AI will create more jobs than it eliminates. And there’s evidence: according to a report commissioned by ZipRecruiter, in 2018 alone, AI created three times more jobs than it destroyed.

Amazon, no stranger to automation and robot-assisted warehouse facilities, recently made global headlines when it announced a $700 million investment to re-train a third of its workforce with technical skills like coding. It’s a massive initiative that underscores how deeply committed the company is to automation technologies, while also recognizing that it will need a highly skilled workforce to run them. Yes, there will be more robots at Amazon, not fewer. But the company is aiming to address skills gaps that will only widen in coming years. And it’s doing so by investing in 100,000 people, not just automation technologies.

This approach to automation is smart, for two reasons. For many employers, there are two key challenges to managing workforce costs: 1) training, ensuring your people have access to critical knowledge that is easy to find and consume in real-time and 2) retention of the workers you’ve already invested in. Replacing a highly-skilled worker can cost 400 percent of their annual salary, according to one estimate.

Your workforce is already changing

Transformational technologies like AI are advancing quickly, and more companies are finding ways to deploy them as they evolve. As businesses look to AI to reshape their workforce, it’s important to remember that the workforce is already changing, in very human ways.

At many U.S. companies, older employees are aging out of the workforce. According to the Wall Street Journal, the labor force is growing far more slowly than it did in decades prior. Overall productivity has also declined. And, older workers are staying in their jobs for years longer. Massive workforce re-skilling is an option, but it has a very real cost. (Just ask Amazon.)

However, losing a highly-skilled subject matter expert (SME) also carries a critical cost. Unilever, an enterprise customer of ours, told us that in the next five years, they’ll lose 330 years of experience to retirement in a single facility alone. When an SME retires, your business shouldn’t lose a career’s worth of institutional knowledge. Augmented reality gives businesses an easy way to capture that knowledge, transfer it to the new hires who need it to be successful, and retain it long-term to help build the next generation of a skilled workforce.

Augmentation vs. automation: why AR is the answer

In industries like manufacturing, uptime is everything. When something goes wrong, it can adversely impact processes down the line. Faults and failures need to be monitored and corrected as quickly as possible. Human error is the source of nearly a quarter of all unplanned downtime in manufacturing, and that downtime costs businesses trillions in losses each year. How can human error be minimized? AI is a long way off from identifying equipment failures and then automatically fixing them.

Augmented reality, at its core, is a new user interface: a way for humans to visualize and interact with data more intuitively than before. Humans evolved to interact with the world with their hands and their eyes; interacting with 2D data like words and spreadsheets is a crude abstraction that underutilizes one of the most powerful parts of the brain, the visual cortex. The visual cortex enables a person to consume, filter and process vast amounts of information about the real world, and using it as the interface to computing power is an enormous opportunity. In this way, we can augment humanity by merging the best of both worlds: the near-infinite, perfect memory and vast processing power of networked computers with the intelligent reasoning and extreme adaptability of the human mind and body.

Using this mix, we can leverage the strengths of each while compensating for the weaknesses of each. AI is far from generalized intelligence (although OpenAI is trying), and robots are far from perfect at actuating and interacting with the world. Augmenting humans with contextually relevant data and insights (potentially from IoT and AI systems) can be an extremely beneficial pairing.

In an enterprise context, AR can help workers reduce downtime and more accurately assemble, repair or maintain complex machinery. AR-assisted workers in manufacturing or field service can access contextual digital overlays and step-by-step instructions. They can review previously recorded support sessions, complete with AR annotations, to see how others solved a problem or completed a task on the exact same piece of equipment they’re working on. And workers can even initiate a live, AR-enabled video session with a remote expert who can see what they see and talk them through a task, dropping in pre-built AR instructions or drawing on the worker’s real-world view to help along the way. It’s expert knowledge, on demand, shareable across the enterprise and accessible exactly when it’s needed.

While some worry about the impending automation apocalypse as the ultimate job eliminator, AR can create opportunities to build a smarter workforce that will exist alongside automation tools like robots and AI. It’s an ideal platform for transferring and retaining expertise from experts to those learning new skills, regardless of physical location. It can also be used to bridge your company’s data and your employees in the real world, boosting productivity and minimizing costly downtime. And, when workers are more productive and better at their jobs, overall job satisfaction improves, and that’s a win for everyone involved.

Since co-founding Scope AR in 2011, CEO Scott Montgomerie has been one of the first executives to get augmented reality (AR) tools into use at multi-billion-dollar corporations.

Read original article.

Are You Ready for Extended Reality (XR)?

Digitalengineering247.com: Kenneth Wong, August 1, 2019

Identifying the right use case is key to getting the most out of enterprise augmented reality.

In Volvo’s research facilities in Sweden, test drivers are becoming accustomed to operating future car models that only exist as pixels. The car on the road is a physical vehicle; but what the drivers see is something else. Outfitted with Varjo’s XR-1 augmented reality (AR) device, they see the interior of a car not yet manufactured.

“With this approach, we can, for the purpose of evaluation, use different virtual display options of the dashboard to see how drivers perceive them while driving the car. So, wearing the XR-1 headset, the driver ‘sees’ the virtual dashboard in the car, which in reality does not yet exist,” Volvo’s press office explained in an email to Digital Engineering.

It’s just one example of how AR fuses physical and virtual products, and enables design review and testing that is otherwise impossible. Ray-traced, rendered videos give you impressive visuals. Software-based simulation helps you figure out how a product might fail. But AR lets you experience a virtual product as though it were physically present. From automotive and aerospace to consumer goods, many manufacturers are looking at extended reality as the new frontier in product development.

But AR enterprise adoption is not plug-and-play. Without adequate preparation, projects can easily go awry. For this article, we spoke to those who have gone through the journey to understand what it takes.

Driving Pixels

Headquartered in Helsinki, Varjo launched its first virtual reality headset called VR-1 in January this year, calling it “the first human-eye resolution VR.” In its announcement, the company wrote that VR-1 has “a resolution of more than 60 pixels per degree, which is 20X+ higher than any other VR headset currently on the market. VR-1 also comes with the world’s most advanced integrated eye tracking, enabling high-precision analytics and interaction with human-eye resolution VR content.” In May, the company launched its first AR headset, called XR-1 Developer Edition, using the term XR for extended reality.

The eye-tracking technology in VR-1 and XR-1 allows the user to use their eyesight—and where they choose to focus—as a pointer. Without the need to use controllers to select and execute commands, the system leaves the user’s hands free for other tasks. In driving simulation, the feature is particularly important as the test driver needs his or her hands to control the steering wheel.

“The highly accurate eye-tracking technology embedded inside the XR-1 makes it easy to assess how drivers use a new functionality and whether they are distracted in any way while driving,” said Volvo’s press office. “This technology-based approach to measuring distraction levels ensures that Volvo Cars can develop new features without causing additional distraction. Therefore, wearing the XR-1 headset while actually driving gives us real insights, which we can take into the development of our cars.”

Volvo is not just a customer of Varjo, but also an investor. The Volvo Cars Tech Fund is supporting the startup headset maker’s ongoing developments. “For using mixed reality (MR) in product development most optimally, it should be integrated into existing workflows to enhance and improve existing systems rather than creating completely new ones from scratch,” advised Volvo.

Props Make a Difference

While Elizabeth Baron was working as the immersive reality technical specialist for Ford, she oversaw the creation of a VR setup that let engineers perform surface highlights on vehicles that had not yet been built. To replicate the way automotive engineers would shine a light on a car to observe the reflections, she assembled suitable VR headsets with position tracking.

To make the VR session much more realistic, Baron made one significant tweak. “I went to the dollar store nearby, bought a bunch of 12-in. flashlights, and tracked them,” she recalls. “In most cases, it’s advantageous to have a physical object that represents the real device the user would naturally be using for the task.”

In shape, proportion and function, the cheap flashlights were much closer to the real equipment the engineers would use for their routine surface highlight tests. The modification, which came at a small cost, made the VR setup so convincing that, when the session was over, the users were often attempting to switch off the dummy flashlights, which were never turned on to begin with. “We always got a kick out of watching them do that,” Baron says with a chuckle.

Baron left Ford at the beginning of 2019 to start her own firm, Immersionary Enterprises. She recently started a collaboration with Silverdraft Supercomputing. “I want to work with enterprise clients on their XR journey, to work on both their culture and technology to enable better collaboration,” she says.

The New Medium for Collaboration

If you peel off the glossy visuals in many multi-user MR applications, you’ll often find underneath the all-too-familiar features from WebEx, Skype and FaceTime. At its core, the NVIDIA Holodeck is nothing but a massive group chat environment with ray-traced visuals. Certainly, inside the Holodeck, participants get a much deeper understanding of the design, engineering and manufacturing issues they face because they can inspect a life-size digital replica of the product as though they were standing in front of it. But the tools for voice communication, text messaging and screenshot snapping are nearly identical to those found in Skype or Facebook messenger. This is both good and bad.

It’s good because the similarities allow users to ease into the new medium without significant culture shock. But bad because the same similarities may make users miss the fundamentally different ways an XR workflow needs to be supported.

It’s fairly straightforward to set up and support a multi-user collaboration system with a backend mechanism to automatically capture and archive the sessions on a public or private cloud. But setting up and supporting a similar type of workflow in AR, VR or MR has different storage requirements, due to the large amount of 3D data and photorealistic video streams involved.
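
To make the difference concrete, here is a rough, illustrative estimate; the bitrate and session length are assumptions chosen for the arithmetic, not figures from the article. Archiving a one-hour review captured as a single 4K stereo video stream at about 50 Mbps, before counting meshes, CAD assets and tracking data, already runs to:

\[
50\ \text{Mbit/s} \times 3{,}600\ \text{s} = 180{,}000\ \text{Mbit} \approx 22.5\ \text{GB per hour of session.}
\]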

Baron designed a global immersive collaboration paradigm and conducted the first immersive review between Michigan and Australia in 2012. “I saved a lot of XR discussions to record what happened in the meeting, thinking people would go back to review them, but nobody did,” reveals Baron. “So we learned that saving a summary of what was learned during the session is a better strategy.”

Identifying the Right Use Case

Realtors like to quip, “There are three important aspects to buying and selling properties: Location, location and location.” Ask David Nedohin, president and cofounder of Scope AR, about AR adoption and you’ll get a similar response.

“The three most important aspects to successful enterprise AR engagement are the right use case, use case and use case,” he says. “Companies have to figure out the use cases that make the most sense. After that, then they can align the necessary workflow with the current technology available and figure out how to support that.”

One general use case Scope AR is betting on is remote assistance. To enable it, the company offers an integrated AR content-authoring platform called WorkLink. The platform allows organizations to create and publish AR-powered work instructions.

“The software facilitates real-time remote assistance video calls between a technician and a remote expert. While on a live video call, the expert can see the real-world view of a colleague on the shop floor, for example, and walk him or her through a repair or maintenance procedure by annotating the view with animated, 3D digital content or by dropping in a set of pre-built AR instructions for the technician to follow,” explains Nedohin.
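
As a purely illustrative sketch of what that workflow involves (this is hypothetical Python, not Scope AR’s WorkLink API or wire format; every name and field below is an assumption), the expert’s markup ultimately has to be expressed as content anchored to a pose in the technician’s physical space and streamed into the live session:

# Hypothetical sketch: how an expert's AR annotation might be represented and
# serialized for a live remote-assistance session. This is NOT Scope AR's API;
# names and fields are illustrative assumptions.
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Literal, Tuple


@dataclass
class Pose:
    """Position (meters) and orientation (quaternion) in the technician's
    world-anchored coordinate frame, as established by the headset's tracking."""
    position: Tuple[float, float, float]
    rotation: Tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)


@dataclass
class AnchoredAnnotation:
    """One piece of expert guidance pinned to a real-world location."""
    kind: Literal["arrow", "highlight", "text", "prebuilt_instruction"]
    pose: Pose
    payload: str                      # label text, or an ID of a pre-built AR instruction
    author: str = "remote_expert"
    timestamp: float = field(default_factory=time.time)


def to_session_message(annotation: AnchoredAnnotation, session_id: str) -> str:
    """Serialize the annotation as a JSON message for the live session channel."""
    return json.dumps({"session": session_id, "annotation": asdict(annotation)})


if __name__ == "__main__":
    # Expert drops an arrow 1.2 m in front of the technician, pointing at a valve.
    note = AnchoredAnnotation(
        kind="arrow",
        pose=Pose(position=(0.0, 1.1, 1.2)),
        payload="Close this isolation valve before removing the panel",
    )
    print(to_session_message(note, session_id="demo-call-001"))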

Although Scope AR’s software remains hardware agnostic, Nedohin believes certain AR gear offers clear advantages over others. “To me, without a doubt, the Microsoft HoloLens 2 with its computer vision is one of the best devices for our applications,” he notes. “Aside from camera and processor power, it largely comes down to the use of top-of-the-line computer vision technology to map out the natural environment for virtual object placement.”

An Easy Entry to AR

Hardware makers such as Lenovo and Epson are also gunning for remote expert assistance as a low-barrier entry to AR. At the recent Augmented World Expo (AWE, Santa Clara, CA, in late May), Lenovo launched its first enterprise-targeted AR glasses, called ThinkReality A6. To attract application developers, the company also released the ThinkReality software platform, which includes sample apps for AR/VR applications. One of them is a remote expert communication app.

Around the same time, Epson launched Moverio Assist, a System-as-a-Service product to set up and deliver remote expert assistance via Moverio AR glasses (specifically for Moverio BT-300, Moverio BT-350 and Moverio BT-350 A). Users buy the supported AR glasses and supply their own expertise, but Epson provides the cloud-hosted communication pipeline to let the field technician and the expert connect, troubleshoot and share files.

“For remote assistance, you just need a set of basic features: front- and back-facing cameras, two-way audio and file sharing. We see this as an easy onramp to get companies up and running in AR. It’s self-service, no onboarding process,” explains Leon Laroue, technical product manager for Epson Moverio.

With Epson’s Moverio AR glasses, the field technician may from time to time use a portable smartphone-size pointer to select and open files, but for the most part they can work hands free. Compared to the clumsy use of a smartphone’s video camera to transmit and work on machinery at the same time, the AR-powered approach is a better alternative.

“We built the Moverio Assist with scale in mind, so it doesn’t matter if you’re a company with five or 5,000 users. It’s ideal for the industries where, if your machines are down, appliances need to be repaired or equipment needs to be installed, downtime is measured in hundreds of thousands of dollars,” says Laroue.

With an AR-based remote expert program, the expert sitting behind a computer can guide a field technician to perform certain complex tasks that require a deeper level of knowledge. This approach cuts down on the expert’s onsite visits, and allows him or her to service more sites and handle more cases.

Hands Free, Gesture- and Geometry-Aware

Over the years, AR and VR gear has improved in form factor, resolution and function. The latest generation is much lighter, making it easier to wear and work for an extended period. Many now include or are striving to include hand tracking, gesture recognition and environment awareness.

At February’s Mobile World Congress in Barcelona where the Microsoft HoloLens 2 debuted, perhaps one of the most groundbreaking moments was when Julia Schwarz, Microsoft’s principal software engineer for HoloLens 2, played a virtual piano by tickling the invisible ivories with her real fingers.

With the HoloLens 2, Microsoft incorporated gaze and air tap functions. This allows the user to use their head gaze as the targeting mechanism (what you would normally do with a mouse pointer on a flat screen); and the tap gesture in the air as the trigger mechanism (the equivalent of a mouse click on a flat screen).
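
A minimal sketch of that interaction model follows; it is generic illustrative Python, not the HoloLens or Mixed Reality Toolkit API, and the ray test, gesture flag and scene objects are stand-ins assumed for the example:

# Illustrative sketch of the gaze-plus-air-tap interaction model described above.
# Generic pseudocode in Python, not the HoloLens or MRTK API; the raycast,
# gesture source and scene objects are hypothetical stand-ins.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class Target:
    name: str
    center: Vec3
    radius: float  # simple spherical hit volume


def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]


def gaze_hit(origin: Vec3, direction: Vec3, targets: List[Target]) -> Optional[Target]:
    """Return the nearest target whose sphere the head-gaze ray passes through.
    The head gaze acts as the pointer (the analogue of a mouse cursor)."""
    best, best_t = None, float("inf")
    for t in targets:
        oc = (t.center[0] - origin[0], t.center[1] - origin[1], t.center[2] - origin[2])
        along = dot(oc, direction)                 # distance along the ray to closest approach
        if along <= 0:
            continue                               # target is behind the user
        closest_sq = dot(oc, oc) - along * along   # squared distance from ray to sphere center
        if closest_sq <= t.radius ** 2 and along < best_t:
            best, best_t = t, along
    return best


def on_frame(head_origin: Vec3, head_forward: Vec3, air_tap_detected: bool,
             targets: List[Target]) -> Optional[str]:
    """Each frame: gaze selects the target, an air tap commits it (the 'click')."""
    focused = gaze_hit(head_origin, head_forward, targets)
    if focused and air_tap_detected:
        return focused.name
    return None


if __name__ == "__main__":
    scene = [Target("valve_handle", center=(0.0, 0.0, 2.0), radius=0.1)]
    # User looks straight ahead at the valve and performs an air tap.
    print(on_frame((0, 0, 0), (0, 0, 1), air_tap_detected=True, targets=scene))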

“Tracking technology has blossomed, and today’s headsets give you better pixel density. They’re a lot lighter. They know where you are. They’re much better at anchoring virtual things on real surfaces. When I was at Ford, around 2010 or 2011, I used only mocap,” recalls Baron. In addition to mocap, or motion capture, Baron later integrated more tracking technologies such as SteamVR.

Motion capture lets you record the physical actions of actors and map them onto digital avatars. Although it produces highly realistic movement, it is also costly because of its complex setup and space requirements. Later, Leap Motion’s small motion detector (priced from around $90) became an easy and affordable way to implement hand-gesture recognition. In May, UK-based Ultrahaptics snatched up Leap Motion for $30 million.

Today’s AR and VR gear with built-in depth cameras, motion sensors and location awareness makes mocap unnecessary in many cases. The headset’s own awareness of where it is, along with its ability to recognize and track finger joints, fills in the previously missing pieces. The gear makes it much easier to translate the headset user’s body gestures and movements into the virtual world, allowing software developers to add new physical-digital interactions for amusement as well as practical purposes.

“Hand gesture recognition is extremely important for AR-based maintenance in automotive and aerospace engineering. You want the user to be able to get inside an assembly and find wiring harnesses, for example,” says Baron. The headset’s ability to map its physical surroundings is also critical because, “in automotive, sometimes you want to look at a whole different virtual front on an existing car.”

Those who want to develop AR-based design review may consider building physical rigs with easily recognizable surfaces onto which virtual objects can be mapped. The physical setup provides the tangible sensation (weight, mass or texture) of the imagined product, while the view in AR or VR supplies the visual layer.
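
The core of such a rig-based setup is simply re-placing the virtual content relative to the tracked pose of the physical mock-up every frame. The short Python sketch below is a hypothetical illustration under assumed names (no specific AR SDK is implied): the rig’s tracked position and yaw are composed with a virtual part’s local offset so the digital layer stays attached to the physical rig as it moves.

```python
# Illustrative sketch (assumed names, not a specific AR SDK): keep a virtual
# part anchored to a tracked physical rig by composing the rig's pose with
# the part's local offset each frame.
import math

def anchor_to_rig(rig_position, rig_yaw_deg, local_offset):
    """Transform a point defined in the rig's local frame into world space."""
    yaw = math.radians(rig_yaw_deg)
    ox, oy, oz = local_offset
    # Rotate the local offset about the vertical (y) axis, then translate
    # by the rig's tracked position.
    wx = rig_position[0] + ox * math.cos(yaw) + oz * math.sin(yaw)
    wy = rig_position[1] + oy
    wz = rig_position[2] - ox * math.sin(yaw) + oz * math.cos(yaw)
    return (wx, wy, wz)

# Each frame the tracker reports where the rig is; the virtual model is
# re-placed 0.3 m above and 0.5 m forward of the rig's origin.
for rig_position, rig_yaw_deg in [((0.0, 0.8, 1.0), 0.0), ((0.1, 0.8, 1.0), 15.0)]:
    print(anchor_to_rig(rig_position, rig_yaw_deg, local_offset=(0.0, 0.3, 0.5)))
```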

Avoid Unnatural Interfaces

AR, as the acronym suggests, allows you to augment reality with digital objects. As Baron learned during her time at Ford, unnatural user interfaces can be detrimental to such use cases. “Don’t give someone a game controller, and tell them, hit that button to do X, swipe left to do Y,” she advises. “Avoid user interfaces that don’t work the way people would naturally work in the real world.”

In maintenance and repair exercises, users need to not only learn the correct placements but also build muscle memory—something software application developers often forget. In the virtual world, you may be able to punch through a cluster of pixels to reach for a wire harness in a tight spot. A technician trained to install or repair something in this unrealistic setup is liable to fail when confronted with the laws of physics in reality.

“Often, what you need to do to prepare for AR is not just technical; it’s also cultural,” says Baron. XR can let a mechanical engineer show a designer why certain pillars and wiring harnesses need to be repositioned to avoid collision, but if the company doesn’t have a collaborative culture that encourages mechanical engineers and designers to work together, outfitting them each with a pair of $3,500 HoloLens 2 smart glasses won’t help.


Can Augmented Reality Make Everyone Experts?

Can Augmented Reality Make Everyone Experts?

Forbes: Tim Bajarin – June 14, 2019

I happen to be very navigationally challenged. For years, as I traveled around the world, I used a paper map to get me to the next meeting or location I was going to, and I would often still get lost. This trait was so prominent that my family and friends nicknamed me “Wrongway Bajarin.”

Due to my work in the UK with various publications, which started in 1984, I learned that London cab drivers were the least navigationally challenged cabbies in the world. For them to get their license to drive the famous Black Cabs, they had to memorize the entire map of London and its surroundings so they could get people to their locations the fastest way possible. On one of my first trips to London, I asked a Black Cab driver how long it took him to memorize the London street maps, and he said that it took him about a year before he could pass his license test.

Fast forward 35 years and, thanks to digital maps, GPS and various navigation tools, I could drive a London cab today and be just as much of an expert on London streets as those Black Cab drivers.

While the GPS and maps example illustrates how technology can help us become expert navigators, at least in cars, the introduction of AR tools will soon make it possible for many of us to become, if not experts, at least more capable of doing things in areas where we have no training or experience.
A good example of AR adding an untrained skill to your capabilities is using an AR app to help put together complicated furniture, assemble barbecues, electric bikes and the plethora of other things we might buy that need assembly.

Many product makers are developing AR apps that overlay instructions on an item you buy from them and show you how to assemble that piece of furniture in a visual, step-by-step process. It is not a video of how to do it: the app overlays the assembly process on top of the actual product you are putting together and walks you through each step on demand.
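
In outline, such an app is a sequence of anchored steps rather than a linear video. The minimal Python sketch below is hypothetical (the part names and instructions are invented for illustration) and shows the kind of data that could drive an on-demand walkthrough: each step names the physical component to highlight and the instruction to overlay, and the user advances only when that step is done.

```python
# Hypothetical sketch of the data behind a step-by-step AR assembly guide:
# each step names the component to highlight and the instruction to overlay.
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    part_id: str        # which component of the product to highlight
    instruction: str    # text/overlay shown anchored to that component

class AssemblyGuide:
    def __init__(self, steps):
        self.steps, self.index = steps, 0

    def current(self):
        return self.steps[self.index]

    def advance(self):
        # The user moves forward only when they confirm the step is done,
        # which is what makes the walkthrough on-demand rather than a video.
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current()

guide = AssemblyGuide([
    AssemblyStep("leg_front_left", "Attach the front-left leg with four M6 bolts"),
    AssemblyStep("leg_front_right", "Attach the front-right leg with four M6 bolts"),
    AssemblyStep("tabletop", "Lower the tabletop onto the frame and hand-tighten"),
])
print(guide.current().instruction)
print(guide.advance().instruction)
```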

I recently attended the Augmented World Expo (AWE) in Santa Clara, CA, and met with the President of Scope AR, David Nedohin. They had a sign in their booth that said “Anyone Can Be An Expert,” so I asked him what this meant.

It turns out that, using AR integrated into various mixed reality glasses, their software platform helps companies make “experts” of non-experts: workers can identify equipment problems and, where possible, even repair them on site using an AR program that overlays the repair steps on top of the actual equipment that needs to be fixed.

Mr. Nedohin told me that for many of Scope AR’s clients, if a piece of equipment in a factory goes down, typically an expert who may know how to fix it is somewhere far away. In the past, most companies dealt with this by flying an expert to the location of the problem to fix it onsite. With Scope AR, businesses can give the average non-expert employee on-demand knowledge with intuitive AR instructions. The company’s AR knowledge platform, WorkLink, gives companies precisely this kind of augmented reality support.

Another example of how Scope AR’s software works is the company’s assistance to Lockheed Martin engineers building NASA’s Orion spacecraft, a vehicle designed to travel to Mars.

“In the old way of doing things, an engineer may start with a 3,000-page binder full of instructions for how to build a specific aspect of the spacecraft. A technician searches the binder to find the correct fastener and memorizes the torque setting before actually going in to tighten the fastener. Today that process is relatively cumbersome, slow, and could be subject to errors,” Nedohin explained to me.

“With Scope AR’s software, the workflow is designed around hands-free information viewed through a Microsoft HoloLens headset. Using three-dimensional views with AR step-by-step instructions, the engineer can see what they need to do, what the torque setting is, and where the fastener goes.”

When Apple showed off new AR apps at its recent Worldwide Developers Conference, it demonstrated their use in a gaming app. When the company originally introduced ARKit, its software tools for creating AR apps, it also emphasized gaming. While AR in games will be a big market, the most significant demand for AR today is coming from businesses that want to use it for internal and external training as well as for field service projects. This aligns with the vision Mr. Nedohin of Scope AR is proposing, that “everyone can be an expert.” I understand that, to some degree, this is marketing hyperbole. On the other hand, his vision is on target: AR, VR and mixed reality software and glasses can help people become proficient in areas where years of training used to be needed to repair sophisticated machinery, assemble equipment or even learn to operate it virtually.

AR and VR are still nascent markets, but after walking the exhibit halls of AWE, I can see that the tools and devices that support their use, especially in business applications, are advancing at a rapid pace. A vision of making people experts in areas where they previously had no training is no longer a far-fetched dream. This is why the business market, primarily vertically driven segments, is where AR glasses will take off first, before they ever gain broad acceptance in the consumer market outside of gaming.


Scope AR Upgrades WorkLink to Give Enterprise Workers New Insights into Processes and Training

Company adds session recording to its industry leading AR knowledge platform and announces continued enterprise customer growth with Becton Dickinson and Lockheed Martin

SAN FRANCISCO and SANTA CLARA, California, May 29, 2019 — Scope AR, the pioneer of enterprise-class augmented reality (AR) solutions, today launched at Augmented World Expo 2019 (AWE) an upgraded version of its highly-touted WorkLink platform. With the addition of session recording, WorkLink becomes the industry’s only AR knowledge platform to offer real-time remote support, access to AR work instructions and the ability to record sessions simultaneously in one application. With this, workers can now easily capture, retain and share knowledge like never before. Scope AR also announced a new enterprise customer, medical device manufacturer Becton Dickinson, as well as expanded use of its integrated AR platform with Lockheed Martin.

“This is an exciting time for the AR industry. Adoption is growing and expectations among users are shifting towards more comprehensive, enterprise-ready solutions,” explained Scott Montgomerie, CEO of Scope AR. “With the latest WorkLink platform, we’ve added even more ways for workers to collaborate and quickly get the knowledge they need to successfully do their jobs. With the addition of session recording, businesses can now better capture and retain knowledge for future use and training purposes, while taking compliance, quality assurance and accuracy to the next level.”

The updated WorkLink platform can be customized with varying sets of functionality depending on customers’ needs. It can also be deployed across all major platforms and select industry wearables so organizations can use their device of choice. The platform is built to help make anyone an instant expert with seamless access to a variety of features including:

  • Session Recording to capture important knowledge delivered during live support video calls for retention, future sharing and new insight into additional training needs and how processes can be improved. Either the technician or remote expert can record a live session so real-time knowledge becomes a reusable asset that can be accessed by others in the future.
  • WorkLink Assist (formerly the standalone product Remote AR) for real-time expert remote assistance
  • WorkLink Create for quick and easy AR content creation for step-by-step work instructions

Beyond its latest product innovations, Scope AR has also experienced continued customer acquisition and growth on the heels of its $9.7 million Series A funding round in March 2019. Becton Dickinson, an American medical technology company that manufactures and sells medical devices, instrument systems and reagents, is the newest addition to the company’s already impressive client roster. Becton Dickinson will use WorkLink at the company’s Automation Center for Enablement to deliver AR instructions across the organization.

Additionally, Lockheed Martin is now expanding its use of Scope AR’s technology after its highly successful implementation of WorkLink to improve workforce training and spacecraft manufacturing procedures. The company is now deploying Scope AR across all four of its business units and a broad variety of use cases.

Lockheed Martin’s Emerging Technologies Lead Shelley Peterson added, “Creating AR work instructions with WorkLink has enabled our Space team to reach unprecedented levels of efficiency and accuracy, as well as reduced manufacturing training and activity ramp-up time by 85%. Scope AR’s platform has proven to be so valuable that we have expanded our AR adoption into even more manufacturing applications within the Space division, as well as leveraging the technology in other areas of the business.”

The next generation of the company’s WorkLink platform is available immediately, and attendees of AWE 2019 can see a demonstration of the new platform at Scope AR’s booth #213. For more information on the upgraded WorkLink application, visit: https://www.scopear.com/solutions/worklink-platform/

About Scope AR
Scope AR is the pioneer of enterprise-class augmented reality solutions, delivering the industry’s only cross-platform AR tools for getting workers the knowledge they need, when they need it. The company is revolutionizing the way enterprises work and collaborate by offering an integrated AR platform that provides more effective and efficient knowledge-sharing to conduct complex remote tasks, employee training, product and equipment assembly, maintenance and repair, field and customer support, and more. The company’s device-agnostic technology supports smartphones, tablets and wearables, making it easy for leading organizations like Boeing, Toyota, Lockheed Martin, Honeywell, Assa Abloy, GE and others to quickly scale their use of AR to any remote worker. The company was founded in 2011 and is based in San Francisco with offices in Edmonton, Canada.

Media Contact:
Brittany Edwards
Carve Communications for Scope AR
Email: scopear@carvecom.com
Phone: 210-382-2165

Scope AR Closes $9.7 Million Series A Funding to Help Make Any Worker An Instant Expert with Augmented Reality

Thriving enterprise AR company continues growth and demonstrated success with Fortune 500 leaders including Lockheed Martin and Unilever

SAN FRANCISCO, March 20, 2019 /PRNewswire/ — Scope AR, the pioneer of enterprise-class augmented reality (AR) solutions, today announced it has secured a $9.7 million round of Series A funding. The round was led by Romulus Capital, with follow-on investment participation from existing investors SignalFire, Susa Ventures, Haystack, New Stack Ventures, North American Corporation and Angel List. Krishna Gupta of Romulus Capital and Wayne Hu from SignalFire will join Scope AR’s Board of Directors.

“AR is becoming an important tool for how knowledge is shared within heavy industry, allowing workers to get the information they need, when they need it, in an intuitive way,” said Scott Montgomerie, CEO and co-founder of Scope AR. “We are thrilled to have the support of our new and existing investors to accelerate our growth and development during a crucial inflection point in the market. It underscores, yet again, that enterprise AR is a leading driver within mixed reality thanks to the impressive ROI and growing list of use cases the technology enables.”

With this latest infusion of capital, the company has raised a total of $15.8 million, which will allow it to further scale and expand enterprise AR adoption at a time when the industrial workforce is shifting and machinery and equipment are becoming increasingly complex. The company is among the first to deliver noteworthy ROI from real-world customer use cases across the aerospace, consumer packaged goods and manufacturing industries. Using the company’s products – WorkLink and Remote AR – industry leaders such as Lockheed Martin, Unilever and Prince Castle have achieved impactful results around improving worker efficiencies, reducing equipment downtime and more accurately diagnosing repair issues.

“Enterprises are now realizing that leveraging AR and other agile, remote software solutions can be the answer to many operational challenges they have always faced — from closing the growing skills gap to reducing downtime,” said Krishna K. Gupta, founder and general partner of Romulus Capital. “Scope AR’s product leadership and vision has put them at the forefront of the industry, addressing these challenges with tools that provide workers with instant access to critical information that helps resolve operational issues in an agile and accurate manner. We’re excited about their product roadmap and growth opportunities as we work more closely with some of the largest enterprises in the world.”

About Scope AR
Scope AR is the pioneer of enterprise-class augmented reality solutions, delivering the industry’s only cross-platform AR tools for getting workers the knowledge they need, when they need it. The company is revolutionizing the way enterprises work and collaborate by offering AR tools that provide more effective and efficient knowledge-sharing to conduct complex remote tasks, employee training, product and equipment assembly, maintenance and repair, field and customer support, and more. The company’s device-agnostic technology supports smartphones, tablets and wearables, making it easy for leading organizations like Boeing, Toyota, Lockheed Martin, Honeywell, Assa Abloy, GE and others to quickly scale their use of AR to any remote worker. The company was founded in 2011 and is based in San Francisco with offices in Edmonton, Canada.