Leveraging Workload-Driven Infrastructure and Software Defined Architectures for Data-Driven AI/ML and Analytics Applications
Scheduled for July 14, 2020, 1 pm to 2:00 pm EDT
Data is being generated in larger volumes and at faster rates than ever, causing congestion, I/O bottlenecks, storage outages, and cost overruns for Artificial Intelligence (AI) and Machine Learning (ML) workloads. As data-intensive workloads scale, it is critical to implement data-driven, software-defined architectures that meet the demands of large data sets. Optimized, accelerated data platforms promise an immediate and tangible solution for delivering discovery and insight from machine-generated data. Combining an optimized data platform with accelerated compute and the right software creates a new storage category: a data store that delivers an enterprise-ready, unified data platform performing across your entire environment, from edge devices to your core data center. This type of platform is a requirement for data management in the era of AI.
This session will educate the HPC, AI/ML, and analytics communities on the properties of Optimized Accelerated DataOps and will foster a discussion of its adoption for high-performance workloads such as AI, ML, and analytics. The discussion will focus on new infrastructure solutions that leverage NVMe over Fabrics (NVMe-oF) and software-defined architectures to handle the large volumes of data generated by mixed workloads, and will advise attendees on the use cases that benefit most from an Optimized Accelerated DataOps platform (e.g., Advanced Driver Assistance Systems (ADAS), Natural Language Processing, edge-to-core, and digital transformation).
Kevin Tubbs, Ph.D., Sr. Vice President, Strategic Solutions, Penguin Computing
Kevin has over fifteen years of High Performance Computing (HPC) experience in areas ranging from software development and application performance characterization and optimization to hardware- and systems-level deployment and management. He has ten years of experience in GPGPU and accelerator programming and in designing heterogeneous computing solution architectures. He also has expertise in computational fluid dynamics, computational science, numerical modeling, and engineering simulation focused on HPC, AI, and heterogeneous computing implementations. Prior to joining Penguin Computing, Kevin served as an HPC consultant and performance engineer for a variety of organizations, including Dell, Inc., High Performance Technologies, Inc., the Naval Research Laboratory (NRL), and the Center for Computation and Technology (CCT) at Louisiana State University (LSU). His clients and customers have included multiple Fortune 500 companies, research universities, and government organizations. Kevin’s current focus is providing end-to-end technology solutions.