Subject description - A4M33BDT

A4M33BDT Big Data Technologies
Roles:                              Extent of teaching:   1P+1C
Department:   13136                 Language of teaching: CS
Guarantors:                         Completion:           KZ
Lecturers:                          Credits:              3
Tutors:                             Semester:             L

Web page:

https://sites.google.com/a/via.felk.cvut.cz/bigdata/

Annotation:

The objective of this elective course is to familiarize students with new trends and technologies for storing, managing, and processing Big Data. The course will focus on methods for data extraction and analysis, as well as on the selection of hardware infrastructure for managing persistent and streamed data, such as data from social networks. As part of the course we will show how to apply traditional methods of artificial intelligence and machine learning to Big Data analysis.

Study targets:

The goal of the course is to demonstrate the basic methods for processing Big Data on practical examples. The examples will focus on statistical data processing.

Course outlines:

1. Introduction, Big Data processing motivation, requirements
2. Hadoop overview - all components and how they work together
i) Hadoop Common: The common utilities that support the other Hadoop modules.
ii) Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
iii) Hadoop YARN: A framework for job scheduling and cluster resource management.
iv) Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
3. Introduction to MapReduce, how to use pre-installed data. Basic skeleton for computing a word histogram in Java (a sketch follows this outline)
4. HDFS, NoSQL databases, HBase, Cassandra, SQL access, Hive
5. What is Mahout, what are the basic algorithms
6. Streamed data - real time processing
7. Twitter data processing, simple sentiment algorithm (see the sketch after this outline)
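
A minimal sketch of the word-histogram skeleton mentioned in point 3, written against the standard Hadoop MapReduce Java API (the class name and the way input/output paths are passed are illustrative, not the course's own code):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Word histogram: the mapper emits (word, 1) for every token,
    // the reducer sums the counts per word.
    public class WordHistogram {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken().toLowerCase());
                    context.write(word, ONE);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word histogram");
            job.setJarByClass(WordHistogram.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // Input and output HDFS paths are passed on the command line.
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

On the cluster such a job is typically submitted with hadoop jar, e.g. hadoop jar wordhistogram.jar WordHistogram <input dir> <output dir> (the jar name is a placeholder); the output directory must not exist before the job runs.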
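Point 7 mentions a simple sentiment algorithm. One common simple approach is lexicon-based scoring; the sketch below uses tiny placeholder word lists and is only an illustration, not the algorithm used in the course:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Toy lexicon-based sentiment scoring: count positive and negative
    // words in a tweet and return the difference. The word lists are
    // illustrative placeholders, not a real sentiment lexicon.
    public class SimpleSentiment {

        private static final Set<String> POSITIVE =
                new HashSet<>(Arrays.asList("good", "great", "happy", "love", "awesome"));
        private static final Set<String> NEGATIVE =
                new HashSet<>(Arrays.asList("bad", "terrible", "sad", "hate", "awful"));

        // Positive score => positive sentiment, negative => negative, zero => neutral.
        public static int score(String tweet) {
            int score = 0;
            for (String token : tweet.toLowerCase().split("\\W+")) {
                if (POSITIVE.contains(token)) score++;
                if (NEGATIVE.contains(token)) score--;
            }
            return score;
        }

        public static void main(String[] args) {
            System.out.println(score("I love this great course"));   // prints 2
            System.out.println(score("terrible weather, I am sad")); // prints -2
        }
    }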

Exercises outline:

1. Cloud computing cluster: basic OpenStack commands, virtualization.
2. Installing Hadoop: hardware and software requirements, administration (creating access), introduction to the basic setup on our cluster, monitoring. Run the word histogram in a single thread.
3. The bag-of-words model, TF-IDF, running SVD and LDA (a TF-IDF sketch follows this outline).
4. Data manipulation: how to scale HDFS up and down, how to run and monitor computation progress, how to organize the computation (see the HDFS API sketch below).
5. Run a random forest classification task using the Mahout algorithms; show how much faster the MapReduce implementation is compared to a single thread on one machine.
6. Semester work presentation and awarding of the course credit (zápočet).
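
Exercise 3 works with TF-IDF. As a reference for what the cluster computes at scale, here is a minimal single-machine sketch of one common TF-IDF variant, tfidf(t, d) = tf(t, d) * log(N / df(t)), over a toy in-memory corpus (the corpus and class name are illustrative):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Single-machine TF-IDF over a tiny in-memory corpus:
    // tfidf(t, d) = tf(t, d) * log(N / df(t)),
    // where tf is the raw count of term t in document d, N is the number
    // of documents, and df(t) is the number of documents containing t.
    public class TfIdf {

        public static List<Map<String, Double>> tfIdf(List<String> documents) {
            int n = documents.size();
            List<Map<String, Integer>> termCounts = new ArrayList<>();
            Map<String, Integer> docFreq = new HashMap<>();

            // Count term frequencies per document and document frequencies per term.
            for (String doc : documents) {
                Map<String, Integer> counts = new HashMap<>();
                for (String token : doc.toLowerCase().split("\\W+")) {
                    counts.merge(token, 1, Integer::sum);
                }
                for (String term : counts.keySet()) {
                    docFreq.merge(term, 1, Integer::sum);
                }
                termCounts.add(counts);
            }

            // Combine term frequency and inverse document frequency into weights.
            List<Map<String, Double>> result = new ArrayList<>();
            for (Map<String, Integer> counts : termCounts) {
                Map<String, Double> weights = new HashMap<>();
                for (Map.Entry<String, Integer> e : counts.entrySet()) {
                    double idf = Math.log((double) n / docFreq.get(e.getKey()));
                    weights.put(e.getKey(), e.getValue() * idf);
                }
                result.add(weights);
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> corpus = List.of(
                    "big data needs big clusters",
                    "hadoop stores big data",
                    "cats sleep all day");
            tfIdf(corpus).forEach(System.out::println);
        }
    }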
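Exercise 4 manipulates data stored in HDFS. A minimal sketch using the Hadoop FileSystem Java API is shown below; the namenode URI and the paths are placeholders, and on the course cluster the connection settings would normally come from the cluster's core-site.xml. The same operations are also available from the command line via hdfs dfs -put and hdfs dfs -ls.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Basic HDFS manipulation through the Hadoop FileSystem API:
    // upload a local file, list a directory, and print how many bytes it holds.
    // The namenode URI and all paths below are illustrative placeholders.
    public class HdfsExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // fs.defaultFS is normally picked up from core-site.xml on the cluster;
            // the URI here is only an example.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            // Upload a local file into HDFS.
            fs.copyFromLocalFile(new Path("/tmp/tweets.txt"),
                                 new Path("/user/student/input/tweets.txt"));

            // List the input directory and sum the file sizes.
            long totalBytes = 0;
            for (FileStatus status : fs.listStatus(new Path("/user/student/input"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " B");
                totalBytes += status.getLen();
            }
            System.out.println("Total: " + totalBytes + " bytes");

            fs.close();
        }
    }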

Literature:

Hadoop: The Definitive Guide, 4th Edition, by Tom White

Requirements:

Seminars will be run in the standard way. We assume that students will bring their own computers for editing scripts. Computations will be executed on a computer cluster with remote access. For the practical exercises, students will use a pre-loaded text database. The seminars will focus on the practical application of the technology to specific examples. Two short tests on the subject matter are scheduled during the semester.

Keywords:

Big Data, Hadoop, Machine learning

The subject is included in these academic programs:

Program | Branch | Role | Recommended semester

