1. Your customer has three XML files in HDFS with the following contents. Each XML file contains comments made by users on a specific day. Each comment can have zero or more "likes" from other users. The customer wants you to query this data and load it into the Oracle Database on Exadata. How should you parse this data?
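A minimal sketch of the kind of parsing question 1 is probing, assuming a hypothetical file layout in which each day's file holds a `<comments>` root with `<comment>` elements and nested `<like>` children (the actual exhibit is not reproduced here, so element and attribute names are illustrative only):

```python
import xml.etree.ElementTree as ET

# Hypothetical layout for one day's file; the real exhibit may differ.
sample = """
<comments date="2024-01-15">
  <comment id="1" user="scott">Nice post!
    <like user="mike"/>
    <like user="ann"/>
  </comment>
  <comment id="2" user="mike">Agreed.</comment>
</comments>
"""

def parse_comments(xml_text):
    """Flatten each comment into a row: (date, comment_id, user, like_count)."""
    root = ET.fromstring(xml_text)
    day = root.get("date")
    rows = []
    for comment in root.findall("comment"):
        likes = comment.findall("like")  # zero or more likes per comment
        rows.append((day, comment.get("id"), comment.get("user"), len(likes)))
    return rows

rows = parse_comments(sample)
```

Flattening each comment (with its like count) into one row per comment is what makes the data loadable into a relational table downstream.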
2. What does the Flume sink do in a Flume configuration?
3. Your customer is spending a lot of money on archiving data to comply with government regulations that require retaining data for 10 years. How should you reduce your customer's archival costs?
4. What access driver does the Oracle SQL Connector for HDFS use when reading HDFS data by using external tables?
5. Which command should you use to view the contents of the HDFS directory /user/oracle/logs?
6. Your customer receives data in JSON format. Which option should you use to load this data into Hive tables?
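For context on question 6, JSON data typically arrives as one record per line. A hedged Python sketch (the field names here are hypothetical, not from the question) of the flattening a Hive JSON SerDe would perform when mapping such records onto table columns:

```python
import json

# Hypothetical one-record-per-line JSON input; real field names will differ.
lines = [
    '{"user": "scott", "action": "login", "ts": 1700000000}',
    '{"user": "mike", "action": "logout", "ts": 1700000042}',
]

def to_rows(json_lines):
    """Map each JSON record onto (user, action, ts) tuples,
    mirroring how a SerDe exposes top-level keys as columns."""
    return [(rec["user"], rec["action"], rec["ts"])
            for rec in map(json.loads, json_lines)]

rows = to_rows(lines)
```

The point of the sketch is only the key-to-column mapping; in Hive itself this translation is declared in the table DDL rather than coded by hand.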
7. Your customer is setting up an external table to give Oracle Database read access to a Hive table. What does hdfs:/user/scott/data refer to in the external table definition for the Oracle SQL Connector for HDFS?
8. Your customer has 10 web servers that generate logs at any given time. The customer would like to consolidate this data and load it into HDFS on the Big Data Appliance as it is generated. Which option should the customer use?
9. The Hadoop NameNode is running on port #3001, the DataNode on port #4001, the KVStore agent on port #5001, and the replication node on port #6001. All the services are running on localhost. What is the valid syntax to create an external table in Hive and query data from the NoSQL Database?
10. Your customer completed all the Kerberos installation prerequisites when the Big Data Appliance was set up. However, when the customer tries to use Kerberos authentication, an error is returned. Which command did the customer fail to run?