Medium Level Questions

Q11. What is the difference between SQL DELETE and SQL TRUNCATE commands?

| SQL DELETE | SQL TRUNCATE |
|---|---|
| Removes rows one at a time and records an entry in the transaction log for each deleted row. | Removes the data by deallocating the data pages used to store the table data, and records only the page deallocations in the transaction log. |
| Slower, because every deleted row is logged individually. | Faster, because only the page deallocations are logged. |
| Requires DELETE permission on the table. | Requires at least ALTER permission on the table. |
| An identity column retains its current value after a DELETE. | An identity column is reset to its seed value after a TRUNCATE. |

Q12. What is a Pivot table?
Pivot tables are one of the most useful features in Excel. They are used to summarize or aggregate large amounts of data, with the summary taking the form of a sum, average, count, or other statistical measure.
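The same idea can be sketched outside Excel. Below is a minimal Python illustration (with made-up sales records) of what a pivot table does: group rows by two dimensions and apply a SUM aggregation.

```python
from collections import defaultdict

# Hypothetical sales records: (region, product, amount)
rows = [
    ("North", "pens", 120), ("North", "pads", 80),
    ("South", "pens", 200), ("South", "pens", 150),
]

# Pivot: total amount per (region, product), like a pivot table with SUM.
pivot = defaultdict(float)
for region, product, amount in rows:
    pivot[(region, product)] += amount

print(dict(pivot))
# {('North', 'pens'): 120.0, ('North', 'pads'): 80.0, ('South', 'pens'): 350.0}
```

Swapping the `+=` accumulation for a count or a running mean gives the other aggregations a pivot table offers.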

Q13. What is the difference between a Data Lake and a Data Warehouse?

| Data Lake | Data Warehouse |
|---|---|
| Stores raw data in its native, non-normalized form. | Stores processed data in structured, often denormalized schemas. |
| Built on relatively modern technologies such as Hadoop and machine-learning frameworks. | Built on older, more established technology. |
| Can hold all sorts of data and is designed with past, present, and future use cases in mind. | Most of the effort is spent analyzing and integrating known sources of data. |
| Data is highly accessible and can be updated quickly. | Data is more rigid: changes are costly to make, and access is restricted to authorized users. |

Q14. What is Hypothesis Testing?
Hypothesis testing is a statistical method for making decisions about a population using experimental (sample) data. A hypothesis is an assumption we make about a population parameter; the test evaluates two mutually exclusive statements about the population (the null and alternative hypotheses) to determine which is better supported by the sample data.
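As a minimal sketch with made-up sample numbers, a one-sample t-test can be computed by hand: the null hypothesis H0 says the population mean is 50, and we reject it if the t-statistic is too large. The critical value 2.36 assumed below is the approximate two-tailed 5% cutoff for 7 degrees of freedom.

```python
import math
import statistics

def t_statistic(sample, mu0):
    """One-sample t-statistic: distance of the sample mean from mu0, in standard errors."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)          # sample standard deviation (n-1 denominator)
    return (mean - mu0) / (sd / math.sqrt(n))

# H0: the population mean is 50; H1: it is not.
sample = [52.1, 49.8, 53.0, 51.2, 50.5, 52.8, 51.9, 50.1]
t = t_statistic(sample, mu0=50.0)
# Reject H0 at roughly the 5% level if |t| exceeds ~2.36 (df = 7).
print(f"t = {t:.2f}, reject H0: {abs(t) > 2.36}")
```

In practice a library routine (e.g. a ready-made t-test function) would also return the exact p-value rather than comparing against a fixed critical value.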

Q15. Data Preprocessing in Data Mining
Data preprocessing is an important step in the data mining process. It refers to the cleaning, transforming, and integrating of data in order to make it ready for analysis. The goal of data preprocessing is to improve the quality of the data and to make it more suitable for the specific data mining task.
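Two of the most common preprocessing steps are imputing missing values and scaling features. A minimal sketch, assuming missing values are represented as `None` and using mean imputation followed by min-max scaling:

```python
import statistics

def preprocess(values):
    """Fill missing values (None) with the mean, then min-max scale to [0, 1]."""
    observed = [v for v in values if v is not None]
    mean = statistics.mean(observed)
    filled = [mean if v is None else v for v in values]
    lo, hi = min(filled), max(filled)
    return [(v - lo) / (hi - lo) for v in filled]

print(preprocess([10.0, None, 20.0, 40.0]))  # missing value imputed as the mean, 23.33...
```

Real pipelines add further steps (outlier handling, encoding categorical features, integrating sources), but they follow the same clean-transform pattern.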

Q16. What Is Time Series Analysis?
Time series data is a sequence of data points recorded or collected at regular time intervals. It tracks the evolution of a variable over time, such as sales, stock prices, or temperature. Time series analysis studies such data to identify trends and seasonality and to forecast future values.
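One of the simplest time series techniques is smoothing with a moving average, which averages each point with its recent neighbours to reveal the underlying trend. A small sketch with hypothetical sales figures:

```python
def moving_average(series, window):
    """Simple moving average: average of each sliding window of the series."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

sales = [10, 12, 11, 15, 14, 18, 17]
print(moving_average(sales, window=3))  # first value: (10 + 12 + 11) / 3 = 11.0
```

Note the output is shorter than the input by `window - 1` points, since a full window is needed before the first average can be formed.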

Q17. Types of Outliers in Data Mining
An outlier is an object that deviates significantly from the rest of the objects. Outliers can be caused by measurement or execution errors, and the analysis of outlier data is referred to as outlier analysis or outlier mining. Common types are global outliers (a single point far from all others), contextual outliers (unusual only in a given context, such as a time of year), and collective outliers (a group of points that deviates together).
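A standard way to flag global outliers is the 1.5 × IQR rule: any point more than 1.5 interquartile ranges beyond the quartiles is suspect. A minimal sketch with made-up data:

```python
import statistics

def iqr_outliers(data):
    """Flag global outliers using the 1.5 * IQR rule."""
    q1, _, q3 = statistics.quantiles(data, n=4)   # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo or x > hi]

print(iqr_outliers([10, 12, 11, 13, 12, 11, 95]))  # [95]
```

Contextual and collective outliers need more than a single threshold, since the same value can be normal in one context and anomalous in another.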

Q18. Collaborative Filtering in Machine Learning
In collaborative filtering, we find users similar to the target user and recommend what those similar users liked. This type of recommendation system does not use the features of an item to recommend it; instead, it groups users into clusters of similar taste and recommends items to each user according to the preferences of their cluster.
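A toy sketch of user-based collaborative filtering, with a hypothetical ratings matrix (0 meaning "not rated"): each unrated item is scored by other users' ratings, weighted by how similar those users are to the target user under cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(ratings, user, items):
    """Return the unrated item with the highest similarity-weighted score."""
    target = ratings[user]
    scores = {}
    for i, item in enumerate(items):
        if target[i] == 0:  # only consider items the user has not rated
            scores[item] = sum(
                cosine(target, other) * other[i]
                for name, other in ratings.items() if name != user
            )
    return max(scores, key=scores.get)

# Hypothetical data: rows are users, columns are the items below.
items = ["film_a", "film_b", "film_c"]
ratings = {
    "alice": [5, 3, 0],   # alice has not seen film_c
    "bob":   [5, 3, 4],
    "carol": [1, 5, 2],
}
print(recommend(ratings, "alice", items))  # film_c
```

Production systems use the same idea at scale, typically with sparse matrices and matrix-factorization models rather than explicit pairwise similarities.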

Q19. What are B-trees Data structures?
When it comes to storing and searching large amounts of data, traditional binary search trees become impractical: with only one key per node, the tree grows deep and every level costs another disk read. A B-tree, also known as a Balanced Tree, is a self-balancing tree designed to overcome these limitations. Each node holds many sorted keys and has one more child than keys, so the tree stays shallow, which makes B-trees well suited to databases and file systems that handle massive amounts of data.
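A minimal sketch of the search half of a B-tree (insertion and node splitting are omitted): within each node a binary search over the sorted keys decides which child to descend into.

```python
from bisect import bisect_left

class BTreeNode:
    """Minimal B-tree node: sorted keys plus one more child than keys."""
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []   # empty list for a leaf

def search(node, key):
    """Return True if key occurs in the subtree rooted at node."""
    i = bisect_left(node.keys, key)      # binary search within the node
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:                # reached a leaf without finding the key
        return False
    return search(node.children[i], key)

# A small hand-built tree (real B-trees build this shape via splitting on insert).
root = BTreeNode([10, 20], [
    BTreeNode([3, 7]),
    BTreeNode([12, 15]),
    BTreeNode([25, 30]),
])
print(search(root, 15), search(root, 8))  # True False
```

Because each node packs many keys into one block, a disk-backed B-tree reads only a handful of blocks even for millions of keys.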

Q20. Detect Cycle in a Directed Graph
To find a cycle in a directed graph we can use Depth First Search (DFS). The idea is that a directed graph contains a cycle if and only if DFS encounters a back edge, i.e., an edge to a node that is an ancestor of the current node (still on the recursion stack).
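A sketch of this technique, using three colors to track node state (unvisited, on the recursion stack, finished); meeting a gray node means a back edge, hence a cycle:

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as an adjacency list."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:          # back edge to an ancestor: cycle
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

print(has_cycle({0: [1], 1: [2], 2: [0]}))   # True  (0 -> 1 -> 2 -> 0)
print(has_cycle({0: [1], 1: [2], 2: []}))    # False
```

The outer loop over all white nodes handles disconnected graphs; the overall cost is O(V + E).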

Tiger Analytics Interview Questions and Answers for Technical Profiles

Think globally, and impact millions. That’s the driving force behind Tiger Analytics, a data-driven powerhouse leading the AI and analytics consulting world. Tiger Analytics tackles challenges that resonate across the globe, shaping the lives of millions through innovative data-driven solutions. More than just a company, Tiger Analytics fosters a culture of expertise and respect, where collaboration remains supreme. With headquarters in Silicon Valley and delivery centers scattered across the globe, including India’s bustling hubs of Chennai and Hyderabad, Tiger Analytics offers a dynamic environment catering to both in-person and remote teams.


Table of Contents

  • Easy Level Questions
  • Medium Level Questions
  • Hard Level Questions

Cracking the Tiger Analytics data analyst interview is not an easy task; it requires careful planning and the right tools. But don’t worry, aspiring data analysts! Sharpen your data storytelling abilities with strategic communication prompts, and impress with your knowledge of the company’s cutting-edge tools and projects. This article contains a treasure of important interview questions that have been frequently asked in data analyst interviews at Tiger Analytics and will turn you into a confident data analyst, so get ready to ace the interview and take your career to the next level!
