Course description
Big Data Fundamentals
Organisations now have access to massive amounts of data, and it is changing the way they operate. They are realising that, to be successful, they must leverage their data to make effective business decisions.
In this course, part of the Big Data MicroMasters program, you will learn how big data is driving organisational change and the key challenges organisations face when trying to analyse massive data sets.
You will learn fundamental techniques, such as data mining and stream processing. You will also learn how to design and implement PageRank algorithms using MapReduce, a programming paradigm that allows for massive scalability across hundreds or thousands of servers in a Hadoop cluster. You will learn how big data has improved web search and how online advertising systems work.
By the end of this course, you will have a better understanding of the various applications of big data methods in industry and research.
Who should attend?
Prerequisites
Candidates interested in pursuing the MicroMasters program in Big Data are advised to complete Programming for Data Science and Computational Thinking and Big Data before undertaking this course.
Training content
The basics of working with big data
Understand the four V’s of Big Data (Volume, Velocity, Variety, and Veracity); Build models for data; Understand the occurrence of rare events in random data.
Web and social networks
Understand characteristics of the web and social networks; Model social networks; Apply algorithms for community detection in networks.
Clustering big data
Clustering social networks; Apply hierarchical clustering; Apply k-means clustering.
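To give a flavour of this unit, here is a minimal sketch of Lloyd's k-means algorithm in plain Python (an illustrative example, not course material; the sample points are our own):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on tuples: assign each point to its nearest
    centroid, then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Nearest centroid by squared Euclidean distance.
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```

Running it on two well-separated groups of points recovers the two groups; the course contrasts this flat clustering with hierarchical approaches.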
Google web search
Understand the concept of PageRank; Implement the basic PageRank algorithm for strongly connected graphs; Implement PageRank with taxation for graphs that are not strongly connected.
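As a taste of this unit, a minimal sketch of PageRank by power iteration with taxation (the damping value `beta` and the example graph are illustrative assumptions; the sketch assumes every node has at least one outgoing link):

```python
def pagerank(links, beta=0.85, iters=50):
    """Power iteration with taxation: r = beta * M r + (1 - beta) / n.
    `links` maps each node to the list of nodes it links to."""
    nodes = sorted(links)
    n = len(nodes)
    r = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # Every node receives the taxation share (1 - beta) / n ...
        nxt = {u: (1 - beta) / n for u in nodes}
        for u in nodes:
            out = links[u]
            # ... plus a beta-discounted share of each in-neighbour's rank.
            for v in out:
                nxt[v] += beta * r[u] / len(out)
        r = nxt
    return r
```

On a strongly connected graph the taxation term is not needed for convergence, but it is what keeps rank from leaking out of dead ends and spider traps in general web graphs.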
Parallel and distributed computing using MapReduce
Understand the architecture for massive distributed and parallel computing; Apply MapReduce using Hadoop; Compute PageRank using MapReduce.
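The MapReduce model covered here can be illustrated with a toy in-memory simulation (a sketch for intuition only; a real Hadoop job distributes the map, shuffle, and reduce phases across a cluster):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    """Toy in-memory MapReduce: map each input to (key, value) pairs,
    group values by key (the 'shuffle'), then reduce each group."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(x) for x in inputs):
        groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example.
def mapper(line):
    return [(word, 1) for word in line.split()]

def reducer(word, counts):
    return sum(counts)
```

The same pattern computes a PageRank iteration at scale: the mapper emits each page's rank share to its out-links, and the reducer sums the shares arriving at each page.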
Computing similar documents in big data
Measure importance of words in a collection of documents; Measure similarity of sets and documents; Apply locality-sensitive hashing to compute similar documents.
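A minimal sketch of the MinHash idea behind this unit (the SHA-256-salted hash family and the signature length are illustrative assumptions): the fraction of positions at which two signatures agree estimates the Jaccard similarity of the underlying sets.

```python
import hashlib

def _hash(i, x):
    """Deterministic hash family: salt the element with the function index i."""
    return int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16)

def minhash_signature(s, num_hashes=200):
    """MinHash signature: the minimum hash value of the set under each
    hash function. P(signatures agree at a position) = Jaccard(A, B)."""
    return [min(_hash(i, x) for x in s) for i in range(num_hashes)]

def estimate_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Locality-sensitive hashing then bands these signatures so that only pairs of documents likely to be similar are compared at all.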
Products frequently bought together in stores
Understand the importance of frequent item sets; Design association rules; Implement the A-priori algorithm.
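The A-priori principle this unit builds on, as a short sketch (the market baskets are illustrative): a set can be frequent only if every subset of it is frequent, so candidates of size k are built and pruned from the frequent sets of size k-1.

```python
from itertools import combinations

def apriori(baskets, support):
    """Return all itemsets appearing in at least `support` baskets."""
    # Count singletons first.
    counts = {}
    for b in baskets:
        for item in set(b):
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s for s, c in counts.items() if c >= support}
    result = set(frequent)
    k = 2
    while frequent:
        # Candidate k-sets from unions of frequent (k-1)-sets ...
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # ... pruned if any (k-1)-subset is infrequent, then counted.
        candidates = {c for c in candidates
                      if all(frozenset(sub) in result for sub in combinations(c, k - 1))}
        counts = {c: sum(1 for b in baskets if c <= set(b)) for c in candidates}
        frequent = {c for c, n in counts.items() if n >= support}
        result |= frequent
        k += 1
    return result
```

Association rules are then read off the frequent itemsets by comparing the support of a set with the support of its subsets.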
Movie and music recommendations
Understand the differences between recommendation systems; Design content-based recommendation systems; Design collaborative filtering recommendation systems.
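A minimal sketch of user-based collaborative filtering, one of the two approaches this unit contrasts (the rating data and function names are illustrative): predict a user's rating for an item as a similarity-weighted average of other users' ratings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts (item -> rating)."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    sims = [(cosine(ratings[user], r), r[item])
            for other, r in ratings.items() if other != user and item in r]
    total = sum(s for s, _ in sims)
    return sum(s * v for s, v in sims) / total if total else 0.0
```

A content-based system would instead compare item profiles (genres, artists, keywords) against a profile built from the items the user already likes.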
Google's AdWords™ System
Understand the AdWords System; Analyse online algorithms in terms of competitive ratio; Use online matching to solve the AdWords problem.
Mining rapidly arriving data streams
Understand types of queries for data streams; Analyse sampling methods for data streams; Count distinct elements in data streams; Filter data streams.
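Stream filtering, the last item above, is typically done with a Bloom filter; here is a minimal sketch (the bit-array size, hash count, and SHA-256-based hash family are illustrative assumptions):

```python
import hashlib

class BloomFilter:
    """Fixed-size bit array for stream filtering: never a false
    negative, and a false-positive rate tunable via m and k."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # integer used as an m-bit array

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

An element that was added is always reported present; an element that was not added is reported present only if all k of its bit positions happen to be set by other elements.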
Course delivery details
This course is offered through the University of Adelaide, a partner institution of edX.
8-10 hours per week
Costs
- Verified Track - $199
- Audit Track - Free
Certification / Credits
What you'll learn
- Knowledge and application of MapReduce
- Understanding of the rate of occurrence of events in big data
- How to design algorithms for stream processing and for counting frequent elements in big data
- How to design PageRank algorithms
- An understanding of the underlying random walk algorithms
Contact this provider
edX
edX For Business helps leading companies upskill their workforces by making the world’s greatest educational resources available to learners across a wide variety of in-demand fields. edX For Business delivers high-quality corporate eLearning to train and engage your employees...