CLOUD/ICWS/SCC/MS/BigData Congress/SERVICES 2015 Keynotes
Keynote 2: A Holistic View of Software Evolution toward Software as a Service (Carlo Ghezzi, Politecnico di Milano, Italy) (06/28 Sunday, 9:40-11:30; Gallery 8)
Keynote 2: Big Data as a Service at NASA (Tsengdar J. Lee, NASA, USA)(06/29 Monday, 10:50-12:00; Gallery 8)
Keynote 3: Data Analytics as a Service: The Next Big Challenge in Services Computing (Ling Liu, Georgia Tech, USA) (06/30 Tuesday, 10:50-12:00; Gallery 8)
Keynote 4: Kubernetes and the Path to Cloud Native (Eric Brewer, VP of Infrastructure at Google, UC Berkeley, USA) (06/30 Tuesday, 14:10-15:20; Gallery 8)
Keynote 5: Cognitive Computing at IBM Research (Guruduth Banavar, VP of Cognitive Computing, IBM Research) (07/01 Wednesday, 10:50-12:00; Gallery 8)
Keynote 6: The Role of the Cloud, its Software Stack and Big Data Analytics in Scientific Research (Dennis Gannon, Indiana University, USA) (07/01 Wednesday, 14:10-15:20; Gallery 8)
The current trend of Software as a Service has posed unprecedented challenges and demands on software evolution, which has been an endemic phenomenon for decades. Software needs to evolve because requirements change, because the environment with which it interacts changes, because the platform on which it runs changes, or because the applications that use it change. Continuous evolution is intrinsic to iterative and incremental (agile) development and has to continue after systems are released. Increasingly, software evolution has to take place at run time, while the software is running and providing service. There is also an increasing demand for software that can self-adapt to changes. Existing approaches to software development need to be rethought to respond to these challenges, which require both extreme flexibility and dependability. The traditional separation between development and operation (design time and run time) blurs and even fades. The talk focuses especially on modeling and verification, which need to be rethought in the light of perpetual development and evolution. It also focuses on achieving self-adaptation to support continuous satisfaction of non-functional requirements, such as reliability, performance, and energy consumption, in the context of virtualized environments (cloud computing, service-oriented computing).
Carlo Ghezzi is an ACM Fellow (1999), an IEEE Fellow (2005), and a member of the European Academy of Sciences and of the Italian Academy of Sciences. He received the ACM SIGSOFT Outstanding Research Award (2015) and the Distinguished Service Award (2006). He is the current President of Informatics Europe. He is a regular member of the program committees of flagship conferences in the software engineering field, such as ICSE and ESEC/FSE, for which he has also served as Program and General Chair. He has been Editor-in-Chief of the ACM Transactions on Software Engineering and Methodology and is currently an Associate Editor of the Communications of the ACM, IEEE Transactions on Software Engineering, Science of Computer Programming, Computing, and Service Oriented Computing and Applications. Ghezzi’s research has mostly focused on different aspects of software engineering. He has co-authored over 200 papers and 8 books, and he has coordinated several national and international research projects.
Big data challenges are sometimes viewed as problems of large-scale data management, where solutions are offered through an array of somewhat traditional storage and archive theories and technologies. These approaches tend to view big data as an issue of storing and managing large amounts of structured data for the purpose of finding particular subsets of interest. Alternatively, big data challenges can be viewed as knowledge management problems, where solutions are offered through an array of analytic techniques and technologies. These approaches tend to view big data as an issue of extracting meaningful patterns from large amounts of unstructured data for the purpose of finding particular insights of interest. As the community grapples with its scaling challenges, it seeks to find a balance between these competing views. In this talk, I will discuss the research and development efforts at NASA and how we may strike a balance between these two competing views through enhanced interoperability.
Tsengdar Lee is the program manager for the NASA High-End Computing Portfolio. He maintains and modernizes the high-end computing capability to support the aeronautic research, human exploration, scientific discovery, and space technology missions at NASA. He is also the Program Scientist for the NASA Weather Focus Area. In this role, he is responsible for the strategic direction of NASA’s weather research and development portfolio.
He served as NASA Chief Technology Officer for Information Technology between 2011 and 2012. He set up the IT-Labs at NASA and invested in cloud computing and big data projects.
He joined NASA in 2001 as the High-End Computing Program Manager for the Earth Science Enterprise. His work primarily focused on weather and climate computational modeling. Between 2002 and 2006, he managed the Earth Science Global Modeling Program. He funded research efforts to study global climate change, weather forecasting, and hurricane prediction challenges.
Services computing research is entering an exciting time. Data has been the No. 1 fastest-growing phenomenon on the Internet for the last decade. Engineering big data analytics demands significant advances in delivering data analytics as a service (DAaaS). Although we have witnessed the success of delivering numerous hardware infrastructures, computing platforms, and software applications as outsourced and managed services, big data analytics has not yet been packaged and outsourced as a large-scale “dialtone” service due to (i) the complexity and high cost involved in using conventional analytic platforms and tools to connect big data with rich models of analytics, (ii) the need for innovative approaches to perform cross-layer and cross-network correlation and knowledge discovery, and (iii) the lack of support for elastic and multi-tenant analytics that can seamlessly scale with the exponential growth of big data and the evolving demands of analytics applications.
In this keynote, I will explore the research opportunities and challenges from multiple dimensions toward data analytics as a service (DAaaS). For example, from the perspective of analytic algorithms, I will discuss Multi-tenant Analytics and Cross-Network Analytics through generalization and customization, aiming at developing a methodical approach and a suite of customizable analytic models that can abstract and generalize common analytic problems into black-box and grey-box analytic tools and analytic software libraries. From the perspective of analytic systems, I will discuss Elastic and Scalable Analytics through smart data and computation partitioning and parallelization, and the opportunities of using unconventional data structures and unconventional software architectures.
Ling Liu is a Professor in the School of Computer Science at Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale data-intensive systems, including performance, availability, security, and privacy. Prof. Liu is an IEEE Fellow and a recipient of the IEEE Computer Society Technical Achievement Award in 2012. She has published over 300 international journal and conference articles and is a recipient of best paper awards from a number of top venues, including ICDCS 2003, WWW 2004, the 2005 Pat Goldberg Memorial Best Paper Award, IEEE Cloud 2012, IEEE ICWS 2013, Mobiquitous 2014, and ACM/IEEE CCGrid 2015. In addition to serving as general chair and PC chair of numerous IEEE and ACM conferences in the data engineering, very large databases, distributed computing, and cloud computing fields, Prof. Liu has served on the editorial boards of over a dozen international journals. Currently Prof. Liu is the Editor-in-Chief of IEEE Transactions on Services Computing. Prof. Liu’s current research is primarily sponsored by NSF, IBM, and Intel.
We are in the midst of an important shift to higher-levels of abstraction than virtual machines. Kubernetes aims to simplify the deployment and management of services, including the construction of applications as sets of interacting but independent services. We explain some of the key concepts in Kubernetes and show how they work together to simplify evolution and scaling.
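To make the abstraction shift concrete, here is a minimal illustrative Kubernetes manifest, not drawn from the talk itself; all names and images in it are hypothetical, and it uses the current declarative API style rather than the 2015-era one. A Deployment describes a set of identical, independently replaceable pods, and a Service gives them a single stable, load-balanced endpoint, which is the kind of composition of independent but interacting services the abstract describes.

```yaml
# Illustrative sketch only; names and the container image are hypothetical.
# A Deployment manages a replicated set of pods running one service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3                  # scale the service by changing this number
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello             # pods carry labels; Services select on them
    spec:
      containers:
      - name: hello
        image: example/hello:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
---
# A Service provides one stable name and load-balances across the pods above.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello                 # matches the pod labels in the Deployment
  ports:
  - port: 80
    targetPort: 8080
```

The design choice this illustrates is declarative management: the operator states the desired number of replicas, and the system continuously reconciles actual state toward it, which is what simplifies evolution and scaling relative to managing individual virtual machines.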
Eric Brewer is a vice president of infrastructure at Google. He pioneered the use of clusters of commodity servers for Internet services, based on his research at Berkeley. His “CAP Theorem” covers basic tradeoffs required in the design of distributed systems and followed from his work on a wide variety of systems, from live services, to caching and distribution services, to sensor networks. He is a member of the National Academy of Engineering and a winner of the ACM-Infosys Foundation Award for his work on large-scale services. Eric was named a “Global Leader for Tomorrow” by the World Economic Forum and the “most influential person on the architecture of the Internet” by InfoWorld.
Guruduth Banavar is vice president of cognitive computing at IBM Research, responsible for creating the next generation of cognitive systems in the Watson family. He has worked across IBM’s businesses to co-innovate with clients, for example, to build a city operations center in Rio de Janeiro. Guru has served on Governor Cuomo’s commission for improving New York State’s resilience to natural disasters after Hurricane Sandy. His work has been featured in The New York Times, The Economist, and other international media. Earlier, Guru was the Director of IBM Research in India, which he helped establish as a pre-eminent center for Services Research and Mobile Computing. There, he and his team received a National Innovation Award from the President of India in 2009 for the Spoken Web project. His early work was on distributed systems and programming models at IBM’s TJ Watson Research Center in New York, which he joined in 1995 after his PhD in Computer Science.
The sciences are currently undergoing a fundamental transition due to the avalanche of data that is generated by instruments, simulations, on-line archives, and social media. The impact of the data revolution is now seen in every academic discipline. Cloud computing and many big data processing techniques were originally invented to manage the data challenges faced by the Internet companies. These challenges ranged from email services to Internet search. An important outcome of these data analysis challenges has been a revolution in massive-scale machine learning. These advances have enabled automatic natural language translation, powerful computer image understanding, and deep semantic analysis of text. In addition, the cloud software stack has evolved to be highly software-defined around networks, containers, and swarms of micro-services. These technologies are now critical tools for many research communities. Life science, environmental science, and urban informatics have been early adopters of cloud and machine learning technologies. This talk presents an overview of the current cloud stack and discusses examples of how science is evolving because of these advances.
Dennis Gannon is Professor Emeritus in the School of Informatics and Computing at Indiana University. From 2008 until he retired in 2014, Dennis Gannon was with Microsoft Research, most recently as the Director of Cloud Research Strategy. In this role he helped provide access to Azure cloud computing resources to over 300 projects in the research and education community in the U.S., Europe, Asia, South America, and Australia. His previous roles at Microsoft include directing research as a member of the Cloud Computing Research Group and the Extreme Computing Group. From 1985 to 2008 Gannon was with the Department of Computer Science at Indiana University, where he was Science Director for the Indiana Pervasive Technology Labs and, for seven years, Chair of the Department of Computer Science. In 2012 he received the President’s Medal for his service to Indiana University. His research interests include cloud computing, large-scale cyberinfrastructure, programming systems and tools, distributed computing, parallel programming, data analytics and machine learning, computational science, problem solving environments, and performance analysis of scalable computer systems. His publications include more than 200 refereed articles and three co-edited books. Gannon received his PhD in computer science from the University of Illinois Urbana-Champaign and a PhD in mathematics from the University of California, Davis.
If you have any questions or queries on MS 2015, please send email to ms AT ServicesSociety.org.
Please join us at IEEE Services Computing Community (http://services.oc.ieee.org/). Press the "Register" button to apply for a FREE IEEE Web Account. As a member, you will be permitted to login and participate in the community. This invitation allows you to join a community designed to facilitate collaboration among a group while minimizing e-mails to your inbox. As a registered member of the Services Computing Community, you can also access the IEEE Body of Knowledge on Services Computing (http://www.servicescomputing.tv).