Data Science in Theory and Practice. Maria Cristina Mariani
Data storage: Data for batch processing operations is typically stored in a distributed file store that can hold high volumes of large files in various formats. This kind of store is often called a data lake. A data lake is a storage repository that allows one to store structured and unstructured data at any scale until it is needed.
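The idea of landing raw files of any format in one repository can be sketched in a few lines of Python. This is a hypothetical miniature "data lake" using a local directory tree; the function name and layout (`<root>/<source>/raw/`) are illustrative choices, not part of any particular product.

```python
import json
import tempfile
from pathlib import Path

def land_raw_file(lake_root: Path, source: str, name: str, payload: bytes) -> Path:
    """Write an incoming file into the lake under <root>/<source>/raw/,
    with no schema enforced up front."""
    target_dir = lake_root / source / "raw"
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / name
    target.write_bytes(payload)
    return target

lake = Path(tempfile.mkdtemp())
# Structured data (JSON) and unstructured data (free-form log text) sit side by side.
land_raw_file(lake, "sensors", "readings.json", json.dumps({"t": 21.5}).encode())
land_raw_file(lake, "logs", "app.log", b"2024-01-01 INFO started\n")
print(sorted(p.relative_to(lake).as_posix() for p in lake.rglob("*") if p.is_file()))
```

The point is that the lake stores everything as-is; interpretation of each file is deferred until a processing job needs it.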
Batch processing: Since the data sets are enormous, a big data solution must often process data files using long-running batch jobs to filter, aggregate, and otherwise prepare the data for analysis. Normally, these jobs involve reading source files, processing them, and writing the output to new files. Options include running U-SQL jobs or using Java, Scala, R, or Python programs. U-SQL is a data processing language that merges the benefits of SQL with the expressive power of one's own code.
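The read-filter-aggregate-write shape of a batch job can be sketched in plain Python. This is only a toy with made-up data (`io.StringIO` stands in for source and output files); a real solution would run the same logic as a distributed job in an engine such as Spark or U-SQL.

```python
import csv
import io

# Source "file": raw transaction records, including an invalid negative row.
source = io.StringIO("city,amount\nParis,10\nParis,5\nLondon,7\nParis,-3\n")

totals = {}
for row in csv.DictReader(source):
    amount = int(row["amount"])
    if amount <= 0:          # filter step: drop invalid records
        continue
    totals[row["city"]] = totals.get(row["city"], 0) + amount  # aggregate step

# Write the prepared output to a new "file" for downstream analysis.
output = io.StringIO()
writer = csv.writer(output)
writer.writerow(["city", "total"])
for city, total in sorted(totals.items()):
    writer.writerow([city, total])
print(output.getvalue())
```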
Real‐time message ingestion: If the solution includes real‐time sources, the architecture must include a way to capture and store real‐time messages for stream processing. This might be a simple data store, where incoming messages are stored into a folder for processing. However, many solutions need a message ingestion store to act as a buffer for messages and to support scale‐out processing, reliable delivery, and other message queuing semantics.
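The buffering role of a message ingestion store can be illustrated with an in-process queue. This is a deliberately simplified stand-in for systems such as Kafka or Azure Event Hubs: one producer thread enqueues messages while a consumer drains them, decoupling the ingestion rate from the processing rate.

```python
import queue
import threading

# Bounded buffer: a full queue blocks the producer, giving crude back-pressure.
buffer = queue.Queue(maxsize=100)

def producer(messages):
    for m in messages:
        buffer.put(m)          # blocks if the buffer is full

def consumer(count):
    received = []
    for _ in range(count):
        received.append(buffer.get())
        buffer.task_done()
    return received

t = threading.Thread(target=producer, args=(["m1", "m2", "m3"],))
t.start()
received = consumer(3)
t.join()
print(received)
```

Real message brokers add what this sketch lacks: durable storage, reliable delivery guarantees, and scale-out across many consumers.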
Stream processing: After obtaining real‐time messages, the solution must process them by filtering, aggregating, and preparing the data for analysis. The processed stream data is then written to an output sink.
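The filter-aggregate-sink pattern for streams can be sketched with an iterator. The message shape (`{"value": ...}`) and the list used as a sink are assumptions for illustration; in practice the sink would be a database, file, or downstream topic.

```python
def process_stream(messages, sink):
    """Consume messages one at a time, dropping malformed readings and
    writing a running aggregate to the output sink."""
    total = 0.0
    for msg in messages:
        if msg["value"] is None:       # filter: drop malformed readings
            continue
        total += msg["value"]          # aggregate: running sum
        sink.append({"running_total": total})

sink = []
stream = iter([{"value": 2.0}, {"value": None}, {"value": 3.5}])
process_stream(stream, sink)
print(sink[-1])
```

Unlike the batch job, this never sees the whole data set at once: each message updates the state and the result is emitted incrementally.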
Analytical data store: Several big data solutions prepare data for analysis and then serve the processed data in a structured format that can be queried using analytical tools. The analytical data store used to serve these queries can be a Kimball‐style relational data warehouse, as observed in most classical business intelligence (BI) solutions. Alternatively, the data could be presented through a low‐latency NoSQL technology, such as HBase, or an interactive Hive database that provides a metadata abstraction over data files in the distributed data store.
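Serving processed data in a structured, queryable form can be demonstrated with SQLite standing in for the analytical store (a relational warehouse, HBase, or Hive in a real deployment). The table and values here are invented for illustration.

```python
import sqlite3

# In-memory database plays the role of the analytical data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 30.0)],
)

# An analytical tool would issue queries like this against the store.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)
conn.close()
```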
Analysis and reporting: The goal of most big data solutions is to provide insights into the data through analysis and reporting. Users can analyze the data using mathematical and statistical models as well as data visualization techniques. Analysis and reporting can also take the form of interactive data exploration by data scientists or data analysts.
Orchestration: Several big data solutions consist of repeated data processing operations, encapsulated in workflows, that transform source data, move data between multiple sources and sinks, load the processed data into an analytical data store, or move the results to a report or dashboard.
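A workflow of repeated processing steps can be sketched as an ordered pipeline of named stages, each passing its output to the next. This toy orchestrator is an illustration only; production systems such as Airflow or Azure Data Factory add scheduling, retries, and monitoring on top of the same idea.

```python
def run_pipeline(steps, data):
    """Run named steps in order, threading each step's output into the next,
    and record the execution order."""
    log = []
    for name, step in steps:
        data = step(data)
        log.append(name)
    return data, log

steps = [
    ("extract", lambda d: d + [4]),             # pull in new source data
    ("transform", lambda d: [x * 10 for x in d]),  # scale every record
    ("load", lambda d: sorted(d)),              # order for the analytical store
]
result, log = run_pipeline(steps, [3, 1, 2])
print(result, log)
```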
2 Matrix Algebra and Random Vectors
2.1 Introduction
The matrix algebra and random vectors presented in this chapter will enable us to precisely state statistical models. We will begin by discussing some basic concepts that will be essential throughout this chapter. For more details on matrix algebra, please consult Axler (2015).
2.2 Some Basics of Matrix Algebra
2.2.1 Vectors
Definition 2.1 (Vector) A vector x is an array of real numbers x_1, x_2, ..., x_n, and it is written as:

    x = (x_1, x_2, ..., x_n)^T.

Definition 2.2 (Scalar multiplication of vectors) The product of a scalar c and a vector x is the vector cx obtained by multiplying each entry in the vector by the scalar:

    cx = (c x_1, c x_2, ..., c x_n)^T.

Definition 2.3 (Vector addition) The sum of two vectors x and y of the same size is the vector obtained by adding corresponding entries in the vectors:

    x + y = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)^T,

so that z = x + y is the vector with ith element z_i = x_i + y_i.
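Definitions 2.1 to 2.3 can be checked numerically. The sketch below uses NumPy arrays as vectors; the particular values of x, y, and c are chosen here for illustration.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # a vector with entries x_1, x_2, x_3
y = np.array([4.0, 5.0, 6.0])
c = 2.0                          # a scalar

cx = c * x    # scalar multiplication: each entry of x scaled by c
z = x + y     # vector addition: ith element is z_i = x_i + y_i

print(cx)
print(z)
```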
2.2.2 Matrices
Definition 2.4 (Matrix) Let m and n denote positive integers. An m-by-n matrix A is a rectangular array of real numbers with m rows and n columns:

    A = ( a_11  a_12  ...  a_1n
          a_21  a_22  ...  a_2n
          ...
          a_m1  a_m2  ...  a_mn ).

The notation a_ij denotes the entry in row i, column j of A. In other words, the first index refers to the row number and the second index refers to the column number.

Example 2.1 If

    A = ( 1  3  5
          2  4  6 ),

then A is a 2-by-3 matrix with, for instance, a_12 = 3 and a_21 = 2.
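Definition 2.4 translates directly to code. The matrix values below are chosen for illustration; note that NumPy indexes from 0, whereas the text's a_ij notation indexes from 1.

```python
import numpy as np

A = np.array([[1, 3, 5],
              [2, 4, 6]])   # a 2-by-3 matrix: m = 2 rows, n = 3 columns

m, n = A.shape
a_21 = A[1, 0]   # entry in row 2, column 1 (the text's 1-based a_21)
print(m, n, a_21)
```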