In data visualization, what does the term 'chart junk' refer to?
- Color choices in a chart
- Data outliers in a chart
- Important data points in a chart
- Unnecessary or distracting decorations in a chart
'Chart junk' refers to unnecessary or distracting decorations in a chart that do not enhance understanding and can even mislead the viewer. It includes excessive gridlines, heavy borders, 3-D effects, and other embellishments that clutter the visual and divert attention from the actual data.
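As an illustration, the matplotlib sketch below (a minimal example; it assumes matplotlib is installed, and the data are made up) draws a plain bar chart and strips the usual sources of chart junk, such as redundant spines:

```python
import matplotlib.pyplot as plt

categories = ["A", "B", "C", "D"]
values = [23, 41, 17, 35]

fig, ax = plt.subplots()
ax.bar(categories, values, color="steelblue")

# Strip chart junk: no top/right spines, no heavy grid, no 3-D effects.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.set_ylabel("Value")
ax.set_title("Plain bars keep attention on the data")

plt.show()
```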
The _______ is a commonly used statistical method in time series to predict future values based on previously observed values.
- Correlation
- Exponential Smoothing
- Moving Average
- Regression Analysis
The blank is filled with "Exponential Smoothing." Exponential smoothing is a widely used statistical method in time series analysis that predicts future values by assigning different weights to past observations, with more recent values receiving higher weights. The basic (simple) form tracks only the level of the series; extensions such as Holt's linear method and Holt-Winters add components that handle trend and seasonality.
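A minimal sketch of simple exponential smoothing in Python (the demand series and the alpha value are made up for illustration):

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series; its last value is the one-step-ahead forecast."""
    smoothed = [series[0]]  # initialize the level with the first observation
    for x in series[1:]:
        # New level = alpha * latest observation + (1 - alpha) * previous level,
        # so recent values carry exponentially more weight than older ones.
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [120, 132, 127, 141, 150, 148, 155]
level = simple_exponential_smoothing(demand)
print(f"One-step-ahead forecast: {level[-1]:.1f}")
```

A larger alpha reacts faster to recent changes; a smaller alpha smooths more aggressively.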
In the context of big data, how do BI tools like Tableau and Power BI handle data scalability and performance?
- Power BI utilizes in-memory processing, while Tableau relies on traditional disk-based storage for handling big data.
- Tableau and Power BI both lack features for handling big data scalability and performance.
- Tableau and Power BI use techniques like data partitioning and in-memory processing to handle big data scalability and performance.
- Tableau relies on cloud-based solutions, while Power BI focuses on on-premises data storage for scalability.
Both Tableau and Power BI employ strategies like in-memory processing and data partitioning to handle big data scalability and enhance performance; Tableau's Hyper engine and Power BI's VertiPaq columnar store are the in-memory components behind this. These techniques let users analyze and visualize large datasets efficiently.
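The partitioning idea itself is easy to see outside either tool. In the hedged pandas sketch below (sales.csv, its region and amount columns, and the chunk size are all hypothetical), a file too large to load at once is aggregated one partition at a time:

```python
import pandas as pd

totals = {}
# Read the (hypothetical) file in 1M-row partitions instead of all at once.
for chunk in pd.read_csv("sales.csv", chunksize=1_000_000):
    # Aggregate within the partition, then fold into the running totals.
    for region, amount in chunk.groupby("region")["amount"].sum().items():
        totals[region] = totals.get(region, 0) + amount

print(totals)
```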
_______ is a distributed database management system designed for large-scale data.
- Apache Hadoop
- MongoDB
- MySQL
- SQLite
Among the options, Apache Hadoop is the one designed for large-scale data distributed across multiple nodes, and it is commonly used in big data processing. Strictly speaking, Hadoop is a distributed storage and processing framework (HDFS plus MapReduce/YARN) rather than a traditional DBMS, though it underpins distributed databases such as HBase. MongoDB, MySQL, and SQLite are database systems but are not specifically designed for distributed large-scale data.
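Hadoop's processing side follows the MapReduce model. The pure-Python sketch below simulates its three phases on a toy corpus (no Hadoop cluster involved; the documents are invented):

```python
from collections import defaultdict

documents = ["big data big clusters", "data lives on many nodes"]

# Map: emit (word, 1) pairs, as a Hadoop Streaming mapper would.
pairs = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle/sort: group values by key (done across nodes on a real cluster).
grouped = defaultdict(list)
for word, count in pairs:
    grouped[word].append(count)

# Reduce: sum the counts for each word.
counts = {word: sum(vals) for word, vals in grouped.items()}
print(counts)
```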
If you are analyzing real-time social media data, which Big Data technology would you use to process and analyze data streams?
- Apache Flink
- Apache Hadoop
- Apache Kafka
- Apache Spark
Apache Kafka is a distributed streaming platform that is commonly used to ingest and handle real-time data streams, and its Kafka Streams API supports processing and analyzing data as it arrives. This makes it a suitable choice for analyzing social media data as it is generated.
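As a hedged sketch, a consumer built with the third-party kafka-python package might look like the following (the broker address and the "social-posts" topic carrying JSON messages are assumptions for illustration):

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Assumes a broker at localhost:9092 and a hypothetical "social-posts" topic.
consumer = KafkaConsumer(
    "social-posts",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:  # blocks, yielding records as they arrive
    post = message.value
    print(post.get("user"), post.get("text"))
```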
_______ is a constraint in SQL that ensures unique values are inserted into a column.
- CHECK
- DEFAULT
- PRIMARY KEY
- UNIQUE
The "UNIQUE" constraint in SQL ensures that all values in a column are unique, meaning no two rows can have the same value in that column. It is often used to enforce data integrity and prevent duplicate entries.
For a business process improvement case study, the _______ framework is commonly applied to identify inefficiencies and areas for improvement.
- Agile
- PDCA
- SWOT
- Six Sigma
In business process improvement case studies, the Six Sigma framework is commonly applied to identify inefficiencies, reduce variability, and enhance overall process performance, typically by working through its DMAIC cycle (Define, Measure, Analyze, Improve, Control). Six Sigma focuses on data-driven decision-making and process optimization.
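Six Sigma's data-driven side shows up in metrics like defects per million opportunities (DPMO). A small sketch with entirely made-up inspection numbers, using Python's standard statistics module:

```python
from statistics import NormalDist

defects, units, opportunities = 87, 10_000, 5  # hypothetical inspection data
dpmo = defects / (units * opportunities) * 1_000_000

# Short-term sigma level, using the conventional 1.5-sigma shift.
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
print(f"DPMO: {dpmo:.0f}, sigma level: {sigma_level:.2f}")
```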
What is the output of print(list("123"[::-1])) in Python?
- ['1', '2', '3']
- ['3', '2', '1']
- [1, 2, 3]
- [3, 2, 1]
The output is ['3', '2', '1']. The [::-1] slice reverses the string "123" to "321", and list() converts it into a list of single-character strings, not integers, which rules out [3, 2, 1].
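You can confirm this step by step in an interpreter:

```python
s = "123"
print(s[::-1])        # '321' -- a slice with step -1 walks the string backwards
print(list(s[::-1]))  # ['3', '2', '1'] -- list() splits a string into characters
```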
What type of database model is SQL based on?
- Hierarchical
- Network
- Object-Oriented
- Relational
SQL is based on the relational database model. It organizes data into tables (relations) of rows and columns and expresses relationships between tables through keys, making it a powerful and widely used language for managing relational databases.
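The sketch below uses Python's sqlite3 module to show the model in miniature: two tables, a key linking them, and a join that follows the relationship (all table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 19.99), (11, 1, 5.00), (12, 2, 42.00);
""")

# The JOIN expresses the relationship between the two tables via the key.
for name, total in conn.execute(
    "SELECT c.name, SUM(o.total) FROM customers c "
    "JOIN orders o ON o.customer_id = c.id GROUP BY c.name"
):
    print(name, total)
```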
If a company needs to process large volumes of unstructured data, which type of DBMS should they consider?
- Hierarchical
- NoSQL
- Object-Oriented
- Relational
In scenarios involving large volumes of unstructured data, a NoSQL DBMS is well-suited. NoSQL databases offer schema flexibility and horizontal scalability, which makes them a good fit for unstructured data models such as documents, graphs, and key-value pairs.
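For example, a document store such as MongoDB accepts records with different shapes in the same collection. A hedged sketch using the third-party pymongo driver (it assumes a MongoDB server on localhost:27017; the database, collection, and documents are made up):

```python
from pymongo import MongoClient  # pip install pymongo; assumes a local mongod

client = MongoClient("mongodb://localhost:27017")
posts = client["demo"]["posts"]  # hypothetical database and collection names

# Documents in the same collection need not share a schema.
posts.insert_one({"user": "ada", "text": "hello", "tags": ["intro"]})
posts.insert_one({"user": "grace", "attachment": {"type": "image", "size_kb": 512}})

print(posts.find_one({"user": "ada"}))
```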