Every business is shaped by “economies of scale,” the principle that operating cost advantages accrue with size or (particularly in manufacturing) production volume. Manufacturers more typically express this concept as “productivity”: how can revenue be improved without an adverse effect on capital and operating costs?
But “productivity” focuses on an enterprise’s specific costs and tends to obscure a salient point about “economies of scale”: the factors shaping an enterprise’s competitive position are always changing. For that reason, larger businesses hold advantages over small and mid-sized businesses, particularly in manufacturing, and particularly now that acquiring and processing data has emerged as a factor determining competitive standing. From sourcing energy and materials, to evaluating product or process design, to exploring new product and market opportunities, managing Big Data is a big challenge for small and mid-sized manufacturing businesses.
Now a startup is commercializing technology developed at Argonne National Laboratory and the University of Chicago to provide “supercomputing-as-a-service” to small and mid-sized businesses (SMBs), helping them achieve economies of scale in managing the expanding volume of information that is constantly redefining businesses and industries.
“High Performance Computing, or HPC, refers to the use of many computers working concurrently (‘in parallel’) to solve large-scale computing problems, such as simulating the operations of a manufacturing plant,” explained Michael Wilde, CEO of Parallel Works Inc. “If, for example, you have 10,000 computers available, and can thus evaluate 10,000 designs at a time, then a computation that would have taken 1 million minutes (almost two years) can now be done in under two hours. That's high performance computing.”
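As a rough illustration of that arithmetic (a minimal sketch using the hypothetical figures from Wilde’s example, not anything specific to the Parallel Works platform), an “embarrassingly parallel” design sweep speeds up roughly by the number of machines applied to it:

```python
# Back-of-the-envelope arithmetic behind Wilde's example of an
# "embarrassingly parallel" design sweep; the figures are his hypothetical
# ones, not measurements of any real system.
designs = 10_000           # independent design variants to evaluate
minutes_per_design = 100   # serial cost of one evaluation
machines = 10_000          # computers working in parallel

serial_minutes = designs * minutes_per_design   # 1,000,000 minutes
parallel_minutes = serial_minutes / machines    # 100 minutes

print(f"Serial:   {serial_minutes / (60 * 24 * 365):.1f} years")  # ~1.9 years
print(f"Parallel: {parallel_minutes / 60:.1f} hours")             # ~1.7 hours
```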
Parallel Works is offering that capability to SMBs through a cloud-based HPC platform delivered as “software as a service,” typically accessed from a Web browser. “Users select from a set of pre-written ‘workflow apps’ developed by third-party vendors, or even by the customer’s own IT staff, to solve specific problems: product design, manufacturing process improvement, assembly, or end-customer application simulation,” Wilde detailed.
Democratizing HPC
Making such functionality available to SMBs is partly an effort to “democratize high-performance computing,” according to a Parallel Works statement. Similar publicly available capabilities are offered by Amazon, Google, and other vendors, “but there’s still a gap in terms of the expertise that an SMB needs to actually use the Cloud,” Wilde noted. The task for Parallel Works is “to connect the cloud to [an SMB’s] critical business and engineering processes.”
What the manufacturer will find, he said, is an ability “to effectively harness the power of parallel HPC … with a minimum of IT expertise needed, with minimal time delay, and on a pay-as-you-go basis with little or no up-front investment.”
The need for supercomputing capability is real and expanding, according to Parallel Works. Manufacturing supply chains are realigning not only according to competitors’ variable production costs but also according to customer needs, regulatory changes, and, in particular, design changes. HPC allows a manufacturer to evaluate vast numbers of design variations in advance of prototyping. After that, there are comparable advantages in evaluating differing production set-up possibilities.
Consider HPC in a machining operation: a product design can be evaluated against any number of tool-path options to identify the optimal production set-up.
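A brief sketch of what such a sweep looks like in code, using Python’s standard concurrent.futures rather than the Parallel Works platform itself; the cost function and tool-path parameters below are hypothetical placeholders, not a real CAM model:

```python
# Sketch of a parallel design-exploration sweep: score many candidate
# tool-path set-ups concurrently and keep the cheapest one. The cost
# function and parameter grid are illustrative placeholders only.
from concurrent.futures import ProcessPoolExecutor

def machining_cost(tool_path):
    """Hypothetical stand-in for simulating one tool-path set-up."""
    feed_rate, step_over = tool_path
    # Toy trade-off: higher feed rate cuts cycle time, wider step-over adds rework.
    return 100.0 / feed_rate + 5.0 * step_over

# Candidate set-ups; a real exploration might enumerate thousands.
candidates = [(feed, step) for feed in (50, 100, 150, 200)
                           for step in (0.1, 0.2, 0.4)]

if __name__ == "__main__":
    # Fan the evaluations out across local cores; a cloud HPC service
    # applies the same pattern across many machines at once.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(machining_cost, candidates))
    best_score, best_setup = min(zip(scores, candidates))
    print("Best set-up (feed, step-over):", best_setup,
          "estimated cost:", round(best_score, 2))
```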
Wilde also identified advantages of HPC in a business’s marketing efforts: “Many products need to be simulated in the sales process,” he noted. “For example, the performance yields expected from advanced building products, better industrial chemicals and other advanced materials, vehicles and vehicle parts, all may need to be simulated to predict their performance and ROI to the customer in specific application contexts.”
An important advantage of Parallel Works’ HPC offering would be the ready availability of technology updates: “Our vision is that HPC systems would run software tools called ‘optimizers’,” according to Wilde, “which quickly and intelligently explore and analyze a vast number of possible process improvements for a product design or a factory or manufacturing system or subsystem.”
What Wilde is describing is typically called a “design exploration service,” which applies to a product’s design, its manufacturing processes, and its end-customer application. Such optimization processes would run alongside existing process control and MES functions, but only periodically, “as new lines are created, new products are introduced, or as model mixes change over time.” The optimization process would interact with PLM, ERP, and related systems in the enterprise to set new MES parameters or execution plans that increase process performance.
There is no hardware for the manufacturer to install: all of the parallel computing hardware typically lives in, or is accessed via, the cloud. “You only pay for the computing you need, and you get as much of it as you need, when you need it,” Wilde noted. “That’s a great reduction in capital expense and in the specialized and costly IT expertise you need to select, purchase and operate such parallel equipment.”
The economies of scale may be shifting for manufacturers of all sizes, but the scope and influence of data are not “scalable”: they are only increasing. As a consequence, business decisions increasingly need the resources of HPC, and adopting “supercomputing” may be seen as a productivity strategy rather than just an investment option.