Watch the FULL playlist on Selling IIoT! bit.ly/Selling-IIoT
@MarkMcMillen2112 · 3 years ago
OK, several problems with this. Some plant data is discrete (like how many bins of acid are in the process area), but most process plants also generate a vast amount of time-series data (think flow rate of caustic in gallons per minute). You need a historian to collect, time-stamp and store that data, and sometimes it needs to be fast (like 1 sample per second). A data lake is nothing but a big database, and none of them are designed to collect time-series data efficiently.

I've worked for a Fortune 500 company that operates a data lake. The plant historian was set up to collect process data for various types of analysis. The IT department insisted that we funnel that data to their data lake and that all access to the data would be through their applications. After a couple of weeks, engineers realized that the resolution of time-series data in the data lake was maybe 30 seconds at best, some far worse. Lots of different analyses require data resolution down to a second, but IT was adamant that they could never store data at that resolution. After much gnashing of teeth, we negotiated them down to 10 seconds on some of the data only. You see, data lakes are just server farms, and each server costs $$$. So they aren't sized to hold that much data, and IT managers start having seizures when you send them 50,000 tags at 1/s resolution. You need a historian for that!

So what happened at the plant? We redirected engineers to pull data for analysis directly from the historian, as it should be, and IT kept collecting their meaningless data at 1-minute resolution. Also, the crappy interface IT provided to view/pull data from their data lake was a hilarious joke. It was a great advertisement for systems like OSI, Proficy and IP21. You could be right about historians disappearing, but no way that happens in 10 years. 50 maybe.

And recipe management. Yes, SCADAs may include recipe management.
So do all modern PLC/DCS architectures in some fashion, and that's where it usually runs, depending on your definition of SCADA. But I have already seen many, many companies doing the actual management of the recipe (meaning how much of ingredient A or B) at the process/manufacturing-engineer level, which means they need a platform for editing/updating recipes that sits "above" the control system and writes any updates down to the control system when appropriate. That's MES! This is an intentional means of decoupling the control engineer from recipe management. The control engineer doesn't care what the actual recipe is, so long as it is working properly. The process or manufacturing engineer is the one interested in the details of a recipe, only wants to download updates when needed, and typically doesn't have the expertise to do this in the control system/SCADA.
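The storage numbers in the comment above can be sanity-checked with back-of-envelope arithmetic. A sketch assuming 16 bytes of raw storage per sample (timestamp + value + quality), an illustrative figure rather than any vendor's actual on-disk format:

```python
# Back-of-envelope: raw storage for 50,000 tags at different sample intervals.
# Assumes 16 bytes/sample (timestamp + value + quality) -- an illustrative
# figure, not any specific historian's on-disk format.

TAGS = 50_000
BYTES_PER_SAMPLE = 16
SECONDS_PER_DAY = 86_400

def gb_per_day(sample_interval_s: float) -> float:
    """Raw gigabytes generated per day at a given sample interval."""
    samples = TAGS * (SECONDS_PER_DAY / sample_interval_s)
    return samples * BYTES_PER_SAMPLE / 1e9

for interval in (1, 10, 60):
    print(f"{interval:>3} s resolution: {gb_per_day(interval):8.2f} GB/day")
# 1 s resolution works out to roughly 69 GB/day raw, vs. about 1.2 GB/day
# at 1-minute resolution -- a 60x difference, before any compression.
```

The 60x gap between 1 s and 1 min resolution is why the sizing argument cuts so differently for IT and for plant engineering.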
@4.0Solutions · 3 years ago
Thanks for the question Mark! We responded in this video here: kzbin.info/www/bejne/faPYf2iJZp1gp6s
@thecuriouskid3769 · 4 years ago
Hi Walker, great video!!! However, I still believe we will need a historian at plants where 1 TB or more of data is generated per day. The cost of sending this huge volume of data to the data lake outweighs the benefits by a factor of 2 or more. Using a historian would reduce the cost of the data push significantly, which matters even more in these testing times.
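One reason a historian reduces the volume pushed upstream is exception/compression filtering, e.g. a deadband that stores a sample only when it moves beyond a threshold. A minimal sketch; real historians use more sophisticated schemes such as swinging-door compression:

```python
# Minimal deadband filter: keep a sample only when it deviates from the last
# *kept* value by more than the deadband. This is the simplest form of the
# exception logic historians use to cut data volume; production historians
# typically layer swinging-door compression on top of it.

def deadband(samples, band):
    """Return only the samples that move more than `band` from the last kept value."""
    kept = []
    for value in samples:
        if not kept or abs(value - kept[-1]) > band:
            kept.append(value)
    return kept

readings = [10.0, 10.1, 10.05, 10.4, 10.41, 9.8, 9.81, 9.79]
print(deadband(readings, band=0.2))  # → [10.0, 10.4, 9.8]
```

Here 8 raw readings collapse to 3 stored samples; on noisy but slow-moving process signals, the reduction in transfer volume can be far larger.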
@4.0Solutions · 4 years ago
We are adding this to our video suggestions. Thank you The Curious Kid! You should totally join our Discord Server. We think you will be right at home!
@rajdattbiradar617 · 5 years ago
Nice video, Walker. I am using the Ignition historian module to dump each and every piece of shop-floor data into a database for further analytics (AI/ML).
@walkerreynolds973 · 5 years ago
Rajdatt, we do the same thing... but we use Ignition as the unified namespace, Canary Labs for the historian (using the Chirp! module to connect Ignition to Canary Labs) and then we pipe the data to AWS for storage--all AI/ML run in the IoT Hub
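The pipeline described above hinges on a structured topic hierarchy in the unified namespace. A minimal sketch of an ISA-95 style topic path; the enterprise/site/area names are hypothetical, not the actual namespace used here:

```python
# Sketch of an ISA-95 style unified-namespace topic path, as commonly used
# with MQTT-based UNS architectures. Names below are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class UnsNode:
    enterprise: str
    site: str
    area: str
    line: str
    tag: str

    def topic(self) -> str:
        """Build the slash-delimited topic path for this node."""
        return "/".join([self.enterprise, self.site, self.area, self.line, self.tag])

node = UnsNode("AcmeCorp", "Plant1", "Packaging", "Line4", "FlowRate")
print(node.topic())  # → AcmeCorp/Plant1/Packaging/Line4/FlowRate
```

In a setup like the one described, both the historian and the cloud pipeline would subscribe to the same topic tree, so each consumer gets the same contextualized view of the data.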
@martinjakobsen9129 · 3 years ago
@@walkerreynolds973 Thanks for the video, Walker. If I understand correctly, the data flow would be: 1) current-state data in the UNS for structure, normalization and contextualization; 2) data into AWS/the data lake for storage of everything?
@primi22 · 2 years ago
I had an argument with a Rockwell guy a couple of years ago; he was saying the future is bigger walls and less cross-vendor integration. WTF.
@infomanav · 2 years ago
You are awesome. Please do a session on cybersecurity for OT.
@Nebul0us · 9 months ago
Hey Walker, what about companies that don't want their data in the cloud for infosec reasons?
@ShaneWelcher · 1 year ago
Another great video
@4.0Solutions · 1 year ago
Thank you, Shane.
@jbreiter56 · 1 year ago
How is an ML correlation or trend on "undefined data" actionable?
@javimaci4615 · 3 years ago
Another excellent performance by Walker Reynolds & Zack (I recognized your voice :) )
@4.0Solutions · 3 years ago
Thank you Javi!
@richarddousset6090 · 4 years ago
What about OSIsoft PI as a data hub / unified namespace (PI AF)?
@4.0Solutions · 4 years ago
Very common question -- the PI Asset Framework can be used as a UNS but should not be. It should be a node in the ecosystem. We will answer this question in an upcoming video since it gets asked so often.
@johnpatanian1538 · 3 years ago
I believe some of the extra functionality that historian vendors have built on top of their storage needs to have equal or better cloud equivalents. For example, Grafana or Tableau on top of Athena over S3 is not a satisfying user experience compared with Canary Axiom or PI Vision tools. I think cloud vendors will get there, however.
@4.0Solutions · 3 years ago
Thank you for sharing! I'm posting this comment in our Discord Server!
@DenisGontcharov · 1 year ago
While I share this view, I fear reaching this objective will take way longer than we may ideally hope (as with most things Industry 4.0). Over the years, data historians in highly digitized operations like aluminum rolling evolved to do more than just acquire and store data. They also perform complex real-time calculations, often by transforming raw signals from the vendor’s specific sensors. Replicating this logic with open protocols requires reverse-engineering years (or decades) of incremental innovation on the part of the vendor. Although I believe eliminating vendor lock-in is desirable, it begs the question: who will do this reverse-engineering and at what cost? Could there be a place in the open architecture where integrating vendor-specific software makes sense after all? Or should any non-open vendor-specific software be ruthlessly eliminated at any cost?
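One concrete example of the derived real-time calculations described above: totalizing a flow-rate signal into cumulative volume. A minimal sketch assuming trapezoidal integration over timestamped samples; actual historian calculation engines are far more involved:

```python
# One of the derived calculations historians commonly layer on raw signals:
# totalizing a flow rate (gal/min) into cumulative volume (gal) by trapezoidal
# integration over (possibly irregularly) timestamped samples.

def totalize(samples):
    """samples: list of (timestamp_seconds, flow_gal_per_min) -> total gallons."""
    total = 0.0
    for (t0, f0), (t1, f1) in zip(samples, samples[1:]):
        dt_min = (t1 - t0) / 60.0          # seconds -> minutes
        total += (f0 + f1) / 2.0 * dt_min  # trapezoid rule on the interval
    return total

# A steady 60 gal/min held for 120 s should totalize to 120 gallons.
print(totalize([(0, 60.0), (60, 60.0), (120, 60.0)]))  # → 120.0
```

Replicating years of vendor-specific signal conditioning on top of such primitives is exactly the reverse-engineering cost the comment above is pointing at.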
@MrFalcon517 · 5 years ago
Nice series of videos. You didn't mention the PI System in the description. OSIsoft rejects the historian label for their product; they have a hierarchical organization of data, and the data can easily be consumed by any application. Would it be called a unified namespace system?
@4.0Solutions · 3 years ago
It is not a unified namespace, no -- it is a custom namespace. The primary issue is that asset frames and event frames cannot abstract all data... AND PI is really designed to consume data for time-series purposes and has no mechanism to broker subscribers and update them on event changes over an IIoT protocol.
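The "broker subscribers and update them on event changes" behavior mentioned above can be sketched as report-by-exception pub/sub: subscribers are only called back when a value actually changes. A minimal stdlib sketch, not any vendor's API:

```python
# Minimal report-by-exception broker: subscribers to a topic are notified only
# when the published value actually changes -- the event-driven behavior an
# MQTT-based UNS provides and a pure time-series store does not.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)   # topic -> list of callbacks
        self.last = {}                  # topic -> last published value

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, value):
        if self.last.get(topic) == value:
            return                      # no change: no event, no callbacks
        self.last[topic] = value
        for cb in self.subs[topic]:
            cb(topic, value)

events = []
b = Broker()
b.subscribe("Plant1/Line4/FlowRate", lambda t, v: events.append(v))
for v in (12.5, 12.5, 13.0):
    b.publish("Plant1/Line4/FlowRate", v)
print(events)  # → [12.5, 13.0]
```

The repeated 12.5 never reaches the subscriber; only genuine state changes generate traffic.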
@ZackScriven · 5 years ago
Love this video!
@boys7371 · 4 years ago
Influx is quite an interesting company; I have seen how it has overtaken Prometheus.io in many ways... I hope they put more attention on industrial datasets.
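For context, InfluxDB ingests time series through its line protocol: a measurement, a tag set, a field set and a timestamp on one line. A minimal sketch of formatting an industrial tag reading; the measurement and tag names here are hypothetical:

```python
# Sketch of InfluxDB's line protocol for an industrial tag reading:
#   measurement,tag_set field_set timestamp
# The measurement and tag names below are hypothetical examples. Tags and
# fields are sorted only to make the output deterministic for this sketch.

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one line-protocol record with a nanosecond timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "flow_rate",
    tags={"site": "Plant1", "line": "A"},
    fields={"value": 12.5},
    ts_ns=1_700_000_000_000_000_000,
)
print(line)
# → flow_rate,line=A,site=Plant1 value=12.5 1700000000000000000
```

A production writer would also need to escape spaces and commas in names and quote string field values, which this sketch omits.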