Thanks to everyone for joining us today on the B2B Manufacturing Marketing Executive podcast by RH Blake. Today, we are joined by Stephen Gold from SparkCognition. Stephen, thank you so much for joining us today.

Interview with Stephen Gold

Welcome to the B2B Manufacturing Marketing Executive Podcast.

If you want to learn how leading B2B marketers are achieving excellent growth results, you are in the right place. This episode is brought to you by R. H. Blake, a leading B2B- and manufacturing-focused marketing agency. To learn more, visit rhblake.com. Now, here is your host, Dan Konstantinovsky.

Connect with Stephen Gold, SparkCognition

Hello, my name is Dan Konstantinovsky, and welcome to another episode of the Manufacturing Marketer podcast. We are honored to have Stephen Gold here of SparkCognition. Stephen is a seasoned technology executive with in-depth experience at Fortune 100 and growth-stage organizations, with over 30 years of experience across SaaS, applications, analytics, and IoT. He is currently the CMO of SparkCognition, and he’s been instrumental in helping the solution provider achieve unicorn status.
Previously, he was the GM of Honeywell’s $2.5 billion IoT and software business, leading digital transformation there. And prior to joining Honeywell, from 2011 he was with IBM’s Watson AI platform, where he was one of the principal business architects who delivered next-generation technology. Suffice it to say, he has extremely specialized and deep experience in this space. Stephen, thank you so much for joining us.

Hey, Dan, it’s a pleasure to join you.

So maybe, if it’s okay, could you just start at a high level? You know, many of our listeners are obviously familiar with AI, but to different degrees. What is computer vision, or visual AI, for maybe the novice?

Yeah. Dan, I mean, we kind of banter about the term artificial intelligence, or AI, as if it’s a single technology. But when we really look under the covers, what we see is a capability that really looks to mimic human capabilities, human behavior. So when we think about the way we think and process information, that really relates to what we would call machine learning or deep learning. When we think about understanding human speech, that’s an area of AI called natural language processing.

And when we think about understanding what we see and how we process images, that’s really computer vision.

Thank you. And so, you know, some of these topics maybe feel more abstract than what many of these folks are doing on a daily basis. How does it work? 

Yeah, great question. And I think, you know, the biggest difference we see with artificial intelligence is that these are systems that are trained, not programmed. So historically, we would sit down and, through various programming languages, we would define the parameters and the objectives, and ultimately we would control the output. With AI, it’s really more related to how we as individuals grew up and learned, so the AI gets progressively smarter. So physically what we do, and there are different ways of training, but effectively: we give it the information, we ask it a question, it gives us a response, we tell it if it’s right or it’s wrong, and it learns.

So in the case of computer vision, what we would do is give it a feed, a video stream, and one way to do this is we could tag the image and say, this is a person, this is a shirt, this is a shoe, this is a car, this is blue. So we can identify objects and colors and sizes and shapes and patterns, and we literally teach it. And then, with additional information and input, it gets smarter as the model progresses.
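To make that "trained, not programmed" idea concrete, here is a minimal, hypothetical Python sketch of the label-and-learn loop Stephen describes. The feature vectors and labels are invented placeholders standing in for real image data; this is not SparkCognition's implementation, just the general pattern of tagging examples and letting the model generalize.

```python
# Minimal sketch: learn categories from labeled examples instead of rules.
import numpy as np

# Labeled training data: each row is a made-up feature vector for one image.
features = np.array([
    [0.9, 0.1, 0.2],   # tagged "person"
    [0.8, 0.2, 0.1],   # tagged "person"
    [0.1, 0.9, 0.7],   # tagged "forklift"
    [0.2, 0.8, 0.9],   # tagged "forklift"
])
labels = ["person", "person", "forklift", "forklift"]

# "Training": compute one prototype (average feature vector) per label.
prototypes = {
    label: features[[i for i, l in enumerate(labels) if l == label]].mean(axis=0)
    for label in set(labels)
}

def predict(feature_vec):
    """Return the label whose prototype is closest to this new image."""
    return min(prototypes, key=lambda lbl: np.linalg.norm(prototypes[lbl] - feature_vec))

# A new frame arrives; the model answers, and correcting wrong answers
# (adding them back as labeled examples) is how it keeps getting smarter.
new_frame = np.array([0.15, 0.85, 0.8])
print(predict(new_frame))  # -> "forklift"
```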

And that’s really the beautiful part about AI. It’s less about where you begin the journey; it’s more about the journey than the destination.

Excellent. So if I’m a manufacturer, you know, I’m probably thinking this is an incredible technology, lots of potential applications. But what are some specific problems or use cases that they should be thinking about for this type of technology?

When I think about computer vision, or what we’ll call visual AI, I really break it down into five categories. There’s security, there’s safety, there is the productivity aspect, there is a quality or an inspection component, and then there is what we reference as situational awareness. So if we think about each one of those in manufacturing: security, do I know who’s in the plant? Do I know where they are? Do I have a count of people if there’s an emergency? Are people accessing the facility through a restricted door? Are people in certain sections authorized to be there, do they have the proper credentials? On the safety side, really important.

Everything from near misses, you know, you don’t want a forklift and a human to come in contact, to activities on the loading dock, to counts of pallets and making sure that they’re not stacked too high or stored in an area that would potentially be compromised. I’m looking for slips, trips, and fall hazards. I’m looking for a man-down situation, you know, someone working in the warehouse in a remote area who has had a medical situation, or perhaps something has fallen on them.

All of those are easily detectable. Productivity: I’m looking for process throughput. I’m looking for patterns. If I’m moving things from the loading dock to storage, from storage to production, is it an optimized path? Are there obstacles I commonly encounter? These are all discoverable by the AI. And then quality is a great example of where you can use computer vision. If we can see it, so too can the AI.

So everything from short fills to box counts to damaged products can be discovered. And finally, when I say situational awareness, what we’re talking about are things like ergonomics: are people bending wrong and potentially going to injure themselves? We can easily discern that type of movement. Facial recognition, you know, is that a known person, or is that someone potentially visiting, or maybe someone who shouldn’t even be in the building? So within manufacturing, I would say probably the most common starting point for computer vision is really in the area of health and safety.

And, you know, it’s dealing with the types of things I mentioned earlier: the near misses, the hazard detection, the man-down situation. Currently within SparkCognition, we have built out over 125 different use cases that we are able to support for a manufacturer.

Talk a little bit about the predictability. So, you know, in a lot of those examples you mentioned, we’re identifying things that maybe are already happening. Talk a little bit about how it could help me avoid a potential risk within some of those use cases.

Yeah, Dan, if you look at the environment today, most of the manufacturers, in fact, I would actually say all of the manufacturers we work with, have an existing investment in cameras, and they have them positioned throughout their facility for various purposes. But when you look at what those cameras are doing, they’re largely recording information for posterity’s sake. They’re using it for a post-mortem examination. So if there is an incident, to your point, they’ll go back and review the video and they’ll determine what was the cause.

And then maybe they’ll enact a new policy to hopefully prevent it. But it doesn’t do anything in the moment to prevent it from happening. So the minute you go to visual AI, it’s going to leverage the exact same system. It’s going to use those cameras and those video feeds. But the big difference is it’s going to analyze information in the moment. So everything is being processed in real time. And this is where I think your question is so important: what do you do with that information, and what can the AI do? The nature of artificial intelligence is really this idea of being proactive, so it can see a situation where perhaps in a particular warehouse there is a man and a machine and they are operating and working independently.

But as that machine and that individual come in close proximity, the AI is understanding that moment, that situation; it’s evaluating it. And at a critical point, it’s going to provide an alert. It can signal the operator, you know, it can text them. It can sound an alarm, a hooter. It can trigger a strobe. It can literally send a code to shut down the machine to avoid the incident.
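As a rough illustration of that in-the-moment evaluation, here is a small, hypothetical Python sketch of proximity-based alerting. The positions, distance thresholds, and alert actions are all assumptions made for the example, not the actual product logic.

```python
# Hypothetical sketch: escalate as a person and a forklift get closer.
import math

WARN_METERS = 4.0   # assumed distance at which to warn the operator
STOP_METERS = 1.5   # assumed distance at which to halt the machine

def distance(a, b):
    """Euclidean distance between two (x, y) floor positions in meters."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def evaluate_frame(person_pos, forklift_pos):
    """Decide what action to take for one analyzed video frame."""
    d = distance(person_pos, forklift_pos)
    if d <= STOP_METERS:
        return "send_stop_code"   # e.g. signal the machine to shut down
    if d <= WARN_METERS:
        return "sound_alarm"      # e.g. hooter, strobe, text to the operator
    return "no_action"

# Example: positions the vision model might report for one frame.
print(evaluate_frame((10.0, 5.0), (12.5, 5.0)))  # -> "sound_alarm"
```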

So this is really that predictive, proactive part of the equation. And when you think about what’s at stake, besides the potential loss of human life, a typical injury in a manufacturing environment runs direct costs around $40,000, and the indirect cost can be 2 to 10 times that: having to shut down a facility, having to do the investigation, having to do corrective action, potential medical expenses, business interruptions. That doesn’t even begin to count the qualitative effect on, you know, the worker psyche, worker retention, worker acquisition.

This is one of the reasons I love it when I walk into a facility and they talk about the number of days they’ve operated without an accident. They appreciate how important it is. But what’s not really being captured is all of these potential near misses, you know, liquid spills on a floor that represent a hazard. The AI can detect that, send an alert, and have that remedied before anybody gets injured.

Excellent. You mentioned quite a few benefits there. What are some considerations that I should be thinking through, if I’m a manufacturer, on whether I’m ready to deploy AI, you know, for health and safety in my environment?

The way I would think about it is, you’ve already made an investment. As I said, every manufacturer we work with, large and small, has created an infrastructure, including the cameras, the network, what we would call the video management system, the recordings themselves. So a big part of the lift, the effort, is already there. The strength of AI is it really integrates and overlays. But, you know, there are different ways to approach this problem.

One of the ways you can look at instituting AI is in the camera itself. Typically the downside of that is that most manufacturers aren’t going to rip and replace every single camera, or half of the cameras. And certainly they don’t want to have to entertain, in 2 or 3 years, an upgrade to all their cameras because the AI is physically embedded in the device itself. So it is an option; it’s just not an option that we see frequently adopted for broader deployments.

So then you get into, you know, picking a software vendor, and like everything in life, not everything is equal. There are software solutions that are largely dependent upon other technology, third-party technology. The downside there is they can’t really adapt and control what the software is doing to the extent that may be required. You know, I think in the case of SparkCognition, our approach is a little bit different in that we’re an IP-first company. So we’re a company with a portfolio of well over 200 assets, patents and patents pending.

And the significance of that is that we really are able to control the experience that we deliver to the manufacturer. So I think the recommendation is to partner with the vendor that meets your specific requirements, partner with the vendor that can extend the value. We talked about the use cases, some of the examples. Again, not every supplier is going to be able to support the diversity of use cases across health and safety and security and productivity. They may focus on just one use case or just one area.

And again, it’s got to align to the objectives of the business. But I always share this, I think, with our manufacturers: the first step isn’t selecting the vendor. The first step is getting educated, getting comfortable as to what the capabilities of the technology are and, candidly, what its limitations are.

Excellent.
Yeah. Great. Once a company gets educated and finds, you know, a partner and a solution provider like yourself, what is the typical timeline for implementation and deployment?

Yeah, the nice part, Dan, is that when we look at this particular type of AI, computer vision, because the infrastructure is in place, the cameras already exist, and the recordings are readily available, what’s really required is simply capitalizing on that recording. And so the implementation time for this form of AI is actually very short. It’s measured in weeks. And what physically happens is typically a quick site assessment to understand what cameras are there, where they are placed, and whether they are going to be able to capture what the objective is. So if you’re looking to understand movement through the warehouse, are the cameras properly positioned to see what it is that would be useful? So, you know, we begin with a quick site assessment.

There may be some shifting of cameras and/or the field of view of the camera. Are you able to see an exit door, for example, or do I need to adjust? That’s a relatively trivial process. It moves very quickly. The software gets installed on a device, a server at the edge. And what I mean by at the edge is really that everything I’m describing is running within the facility. This turns out to be really important for two reasons. One is performance. You don’t want latency.

If you have a situation where a potential incident is going to occur in moments, I mean a second or two, sending it to the cloud and sending it back, that latency could compromise the value. So everything we do processes at the edge. And then the second important point at the edge is privacy. Every organization we work with is concerned as to who will see the data, who has access to the data. If it’s a union shop, they are very interested in terms of what is being done with that data. So maintaining that data at the edge is really important to the process. So that hardware gets set up, a simple server, and the software gets loaded.
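A simplified sketch of that edge-processing idea, assuming placeholder camera, model, and alert functions: frames are analyzed on a server inside the facility, and only compact alert events, never the raw video, leave the site.

```python
# Hypothetical edge loop: read, analyze, and alert entirely on-premises.
import time

def read_frame(camera_id):
    """Placeholder for grabbing the latest frame from an on-site camera."""
    return {"camera": camera_id, "timestamp": time.time(), "pixels": b"..."}

def run_model(frame):
    """Placeholder for the locally hosted vision model; returns detections."""
    return [{"label": "spill", "confidence": 0.91}]

def raise_alert(event):
    """Send a compact alert (text, strobe, stop code); no video leaves the site."""
    print(f"ALERT from camera {event['camera']}: {event['label']}")

for _ in range(3):  # in practice this loop keeps pace with the camera frame rate
    frame = read_frame("dock-03")
    for detection in run_model(frame):
        if detection["confidence"] > 0.8:
            raise_alert({"camera": frame["camera"], "label": detection["label"]})
    time.sleep(0.1)
```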

And then what we do, in a no-code environment, and I’ll explain that, is we load up the experience. What we physically do, and I say no-code, is we associate the use case with the camera. We literally drag and drop with a mouse and say, okay, this camera is going to be looking for, you know, the number of people who enter and exit, and it’s going to be doing identification to ensure that they are permitted into the building. This camera is going to be looking externally for autos, for trucks, for license plates, loading dock utilization.

These internal cameras are looking for hazards. They’re looking for things in the aisles, pallets that were misplaced, boxes that present tripping problems. And so you literally drag and drop onto each camera the use cases you associate. Now, that process takes, you know, a couple of days, and then you tune it. So again, these are learning systems. You run experiments very quickly and say, okay, let’s run a forklift down the aisle, make sure it recognizes the forklift, that it picks it up, that it can recognize speed, that it can recognize the operator.
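Conceptually, that no-code association amounts to a mapping from each camera to its assigned use cases. The sketch below uses invented camera names and use-case identifiers purely to illustrate the idea, not any vendor's configuration format.

```python
# Hypothetical camera-to-use-case mapping behind the drag-and-drop console.
camera_use_cases = {
    "entrance-cam-01": ["people_count", "access_authorization"],
    "yard-cam-02":     ["vehicle_detection", "license_plate", "dock_utilization"],
    "aisle-cam-07":    ["spill_detection", "misplaced_pallet", "trip_hazard"],
}

# Tuning then becomes running controlled experiments against each assignment,
# e.g. driving a forklift down the aisle and confirming the detections fire.
for camera, use_cases in camera_use_cases.items():
    print(f"{camera} watches for: {', '.join(use_cases)}")
```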

So you run some experiments. That total process, end to end, is weeks. It could be 2 to 3, or 2 to 4 weeks, depending on the number of use cases deployed. The reason that’s really so important is that anybody who’s done work in artificial intelligence will tell you that the typical AI project took months, if not years. So this is really a game changer in terms of the benefit they can derive and the return on investment they get in a really short period of time.

So on that point around return on investment, or maybe some other KPIs, what are some objectives that you see customers looking for in a technology like this? And maybe, you know, talk a little bit about how you can help address those.

I think with any technology, you know, at the point of decision, the one thing the organization wants to know is, what is time to value? When do I return my investment? What does that return on investment look like? And I think you can divide it between the quantitative and the qualitative. The quantitative: you know, if you’re deploying this from a safety point of view, you can look at accident avoidance. And organizations are really good; they know the number of accidents.

They have perhaps a number of near misses. They know what the cost associated with them is. There’s a tremendous amount of OSHA data and other third-party data that’s available as it relates to injuries. And so you can begin to quantify very quickly that if you can avoid accidents, there’s a numerical value you can associate with that. On the security side, take one organization’s theft problem.

They had, you know, parts that were leaving the facility in a nonconventional way. They were being stolen, but they weren’t really aware as to how they were leaving the facility. And so one of the things we were able to do is identify individuals entering and leaving the building off hours. You could see them coming, you know, at those times, you could see the unauthorized access, and then you could see them leaving, carrying boxes. And while you might say, well, didn’t they have security? They did.

But, you know, security personnel are not omnipresent. They can’t be everywhere in the facility. In many cases, these are very large facilities, you know, hundreds of thousands or millions of square feet of warehouse. There’s no way that a small crew of professionals is going to be able to properly police that. So that’s a great example where they could quantify the loss of goods, and that prevention obviously netted them retention of the parts and profitability, and they could equate it. But then you get to the qualitative side.

The qualitative side is, you know, your ability to retain top talent, your ability to attract top talent. You know, it’s somewhat dependent upon your reputation. If you as an organization are not seen as a productive work environment from a health and safety standpoint, people aren’t going to want to come to work. Worse yet, you’re not going to be able to attract new talent. And so, while it’s more difficult to quantify those types of things, reputation and brand matter.

And then of course, you know, if you have had situations where there are accidents, for example, there are medical costs or liabilities associated, there are workers’ comp claims; those are all on the health and safety side. When I look at the other side of the equation, productivity and inspection, again, we had situations where customers have said, we want to create a great consumer experience, and the way our product appears is really important.

In this case, it happens to be a potato chip manufacturer, and we use our visual inspection to identify defective chips. We’ve all seen some of those as consumers; they have green spots or brown edges. And they used to do that with, literally, an individual standing there through each shift, looking at countless numbers of potato chips. Now the visual AI does that and manages that process. And so the way I think about it, everything we described today is really about how we augment human intelligence, how we make the human more effective in the process.

Excellent. Well, this has been fascinating. And maybe just the last question here. It just feels like, you know, every other word from the news channel is accompanied by the words AI. What else is happening out there? Where do you see it going? You’ve got this unique perspective. Could you talk a little bit about that?

Sure. And you know, your question couldn’t be more timely, I think. What has been dominating the news lately is this notion of what we call generative AI. Now, historically, what we’ve dealt with is what we call discriminative AI. So if we think about predictions, using data to formulate a point of view on what’s likely to happen in the future, that’s all a discriminative process. Generative is this notion of being able to use a data set and create new insights from that data.

So that’s the generative part: you’re generating new data. Many of us may have experimented with OpenAI’s ChatGPT; you put in a question and you get a response, and it’s really amazing. And I think the implications of this particular technology, generative AI, will be far reaching in business. So, you know, a good example would be in manufacturing: all of the repair orders, the service tickets, all of the information that’s been documented and collected and perhaps stored somewhere.

How do you capitalize on that? Generative is a great way to say, hey, a forklift has broken; the lift itself won’t move forward or backward, but it will start and can turn the wheels, and I can’t put it in the air. And it will literally look through, effectively, the corpus, the data set, and it will come back and say, well, it’s probably one of three things. That process today, culling through all of that information, is really cumbersome.
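A highly simplified sketch of that pattern, assuming a placeholder retrieval step and a generic ask_llm() call rather than any specific vendor's API: past service tickets relevant to the symptom are pulled together and handed to a generative model to suggest likely causes.

```python
# Hypothetical retrieve-then-generate sketch over maintenance records.
service_tickets = [
    "Forklift would not drive; traction motor fuse blown; replaced fuse.",
    "Lift raised slowly; hydraulic fluid low; topped up reservoir.",
    "Unit starts and steers but will not move; drive controller fault code 52.",
]

def retrieve(symptom, tickets, top_k=2):
    """Naive keyword-overlap retrieval standing in for a real vector search."""
    words = set(symptom.lower().split())
    scored = sorted(tickets, key=lambda t: len(words & set(t.lower().split())), reverse=True)
    return scored[:top_k]

def ask_llm(prompt):
    """Placeholder for a call to whatever generative model is in use."""
    return "Most likely causes: drive controller fault, traction motor fuse, ..."

symptom = "Forklift starts and turns the wheels but will not move forward or backward"
context = "\n".join(retrieve(symptom, service_tickets))
print(ask_llm(f"Given these past tickets:\n{context}\n\nWhat should the technician check first for: {symptom}?"))
```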

And so I think generative is going to really accelerate the rate at which we can understand the data that’s available to us. I think it’s very complementary to the things we talked about in the podcast. It’s not an either/or. I think predictive, computer vision, and generative actually come together to provide a very powerful catalyst to improving business productivity, to improving uptime of machines and operations, to improving productivity of the plant, and ultimately to helping the company achieve the financial performance that they’ve committed to their stakeholders.

So I think it’s an exciting time for artificial intelligence.

Absolutely. Stephen, thank you so much for the opportunity to chat with you. Really interesting insights. It’s a fascinating space, one that many of us don’t encounter every day. So really appreciate your perspective and your insights and your time today. Thank you so much.

Oh, absolutely. My pleasure. Thank you, guys.

Thank you for listening to the B2B Manufacturing Marketing Executive Podcast. You can find today’s show notes at rhblake.com/podcast; that’s rhblake.com/podcast. Today’s show was sponsored by R.H. Blake, a leading B2B- and manufacturing-focused marketing agency. Learn more at rhblake.com.