Dean Wampler


Architect at Typesafe
 deanwampler.com
 @deanwampler

Dean Wampler, Ph.D., is a member of the Office of the CTO and the Architect for Big Data Products and Services at Typesafe. He uses Scala and Functional Programming to build Big Data systems using Spark, Mesos, Hadoop, the Typesafe Reactive Platform, and other tools. Dean is the author or co-author of three O’Reilly books on Scala, Functional Programming, and Hive. He contributes to several open source projects (including Spark) and he co-organizes and speaks at many technology conferences and Chicago-based user groups.

YOW! West 2014 Perth

Reactive Designs & Language Paradigms

KEYNOTE

Can reactive designs be implemented in any programming language? Or are some languages and programming paradigms better suited to building reactive systems? How do traditional design approaches, like Object-Oriented Design (OOD) and Domain-Driven Design (DDD), apply to reactive applications? The Reactive Manifesto strikes a balance between specifying the essential features of reactive systems and allowing implementation variations appropriate for each language and execution environment. We’ll compare and contrast different techniques, like Reactive Streams, callbacks, Actors, Futures, and Functional Reactive Programming (FRP), and we’ll see examples of how they are realized in various languages and toolkits. We’ll understand their relative strengths and weaknesses, their similarities and differences, from which we’ll draw lessons for building reactive applications more effectively.
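To give a flavour of one of the techniques mentioned above, here is a minimal, self-contained sketch of composing Futures in Scala. It is illustrative only: the functions fetchUserName and fetchScore are hypothetical stand-ins for asynchronous service calls, not part of any particular toolkit discussed in the talk.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureComposition extends App {
  // Hypothetical asynchronous operations, e.g. remote service calls.
  def fetchUserName(id: Int): Future[String] = Future { s"user-$id" }
  def fetchScore(name: String): Future[Int]  = Future { name.length * 10 }

  // Futures compose declaratively: the second call starts only after the
  // first completes, and no thread blocks while waiting.
  val result: Future[String] = for {
    name  <- fetchUserName(42)
    score <- fetchScore(name)
  } yield s"$name has score $score"

  // Blocking here only to keep the demo self-contained.
  println(Await.result(result, 2.seconds))
}
```

The same pipeline could be expressed with callbacks, Actors, or a stream, which is exactly the kind of comparison the talk explores.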


Reactive Designs & Language Paradigms

WORKSHOP

Spark is a Scala-based distributed computation environment for “Big Data” that is emerging as a replacement for Hadoop MapReduce. Spark offers significantly better performance, greater flexibility for implementing algorithms, and the power of functional programming combinators, all while interoperating with other Hadoop tools, such as HDFS (Hadoop Distributed File System). Spark applications can be written in Scala, Java, Python, and soon, R. A number of specialized libraries in the Spark ecosystem are built on this foundation, including a SQL query tool for flat-file data called Shark, a graph processing system called GraphX, and a machine learning library called MLI, among others.
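As a taste of the style of code covered in the workshop, here is a minimal word-count sketch using Spark’s Scala API and functional combinators. It assumes a local Spark installation, and the paths "input.txt" and "output" are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount extends App {
  // Run locally on all cores; in a real deployment the master URL
  // would point at the cluster.
  val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
  val sc   = new SparkContext(conf)

  // Functional combinators (flatMap, map, reduceByKey) express the
  // computation; Spark distributes it across the workers.
  val counts = sc.textFile("input.txt")
    .flatMap(line => line.split("""\W+"""))
    .filter(_.nonEmpty)
    .map(word => (word.toLowerCase, 1))
    .reduceByKey(_ + _)

  counts.saveAsTextFile("output")
  sc.stop()
}
```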

WHAT WILL YOU LEARN

This hands-on workshop will introduce you to writing Spark applications to solve real-world Big Data problems. We’ll also learn how to use Shark, GraphX, and MLI, and discuss a few other tools in the ecosystem.

WHO SHOULD ATTEND

Software developers, architects, data analysts, database analysts, technical leaders, and anybody with an interest in big data and/or functional programming.