Machine Learning using SQL – Day 6/100

The article below is the intellectual property of Ashish Kohli. It is a great showcase of what SQL can actually do. Give it a read, guys.

Yes, you read that one right! One of the most fundamental machine learning algorithms out there is Linear Regression. In simple words, it is a technique to describe the relationship between a response (a.k.a. dependent) variable and one or more explanatory (a.k.a. independent) variables. After doing some reading on the “math” behind these algorithms, I realized that this can be easily achieved in SQL.

I don’t intend to turn this post into another “Machine Learning 101”. There are plenty of articles out there that explain what Linear Regression is in a much better way, including nitty-gritty details like its back-end optimization algorithm, i.e. Gradient Descent. I will thus try to keep this article as light on the “ML” as possible. However, I’d recommend this 9 byte-sized (typo intended!) article series, which explains all this and some more in really easy language. Also, for most of this post, I will be referring to the formulas and notations used in the hyperlinked article.

Alright! At this point, I hope that you already know the concepts of Linear Regression and how Gradient Descent works. And thus you’ll also know that the relationship between the response and explanatory variable(s) is described by the hypothesis function:

Hθ(x) = θ0 + θ1*x

The goal of Linear Regression is to find the optimal values of θ (theta) that best describe the relationship between the two variables, and Gradient Descent is the way to do that. The update rule below summarizes how Gradient Descent arrives at the optimal values of θ (here m is the number of rows, α is the learning rate, and the sums run over all rows):

θ0 := θ0 − α*(1/m)*Σ (Hθ(xi) − yi)
θ1 := θ1 − α*(1/m)*Σ (Hθ(xi) − yi)*xi

Let’s list down the steps that need to be performed to arrive at the optimal values of θ:

  1. Start with random values of θ and calculate the value of hypothesis function (Hθ)
  2. Fill in the values of θ and Hθ in the convergence equation to get new values of θ
  3. Keep repeating Step 2 until the values of θ don’t change anymore
  4. These values of θ correspond to the minimum cost (or error) for the Hypothesis function
  5. Fill in the final values of θ in the hypothesis function to get Predicted values of the response variable
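Before translating this into SQL, the five steps above can be sketched in plain Python (a minimal sketch for intuition; the sample data, tolerance, and variable names are my own, not from the article):

```python
# Gradient descent for simple linear regression, mirroring steps 1-5 above.
# Assumed sample data: y = 2x, so the ideal fit is theta0 = 0, theta1 = 2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0]

theta0, theta1 = 0.0, 1.0  # Step 1: start with arbitrary values of theta
alpha, var = 0.01, 0.0001  # learning rate and convergence tolerance
m = len(xs)

for _ in range(100_000):
    # Step 2: plug the current thetas into the update (convergence) equation
    errors = [(theta0 + theta1 * x) - y for x, y in zip(xs, ys)]
    t0 = theta0 - alpha * sum(errors) / m
    t1 = theta1 - alpha * sum(e * x for e, x in zip(errors, xs)) / m
    # Steps 3-4: stop once the updates no longer move the thetas
    done = abs(t0 - theta0) < var and abs(t1 - theta1) < var
    theta0, theta1 = t0, t1
    if done:
        break

# Step 5: predictions from the final thetas
preds = [theta0 + theta1 * x for x in xs]
print(theta0, theta1)
```

With y = 2x as input, the loop settles near θ0 = 0 and θ1 = 2, which is exactly the computation the SQL version below reproduces.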

Step 1: Declaring & Initializing variables

We need 6 variables, each for a different purpose:

  1. theta0, theta1 to store the current value of θ0, θ1
  2. theta0_t, theta1_t to store temporary values of θ0 & θ1 before updating the original ones
  3. var (short for variability) to check if the updated value of θ is approaching “near” the current value or not
  4. alpha to store learning rate (read this answer at Quora to understand all about learning rate)
DECLARE @theta0 DECIMAL(10,4);
DECLARE @theta1 DECIMAL(10,4);
DECLARE @theta0_t DECIMAL(10,4);
DECLARE @theta1_t DECIMAL(10,4);
DECLARE @var DECIMAL(4,3);
DECLARE @alpha DECIMAL(4,2);
--Initial values (the thetas must be a decimal type; declaring them as int
--would silently truncate every gradient step to a whole number)
SET @theta0 = 0;
SET @theta1 = 1;
SET @theta0_t = 1;
SET @theta1_t = 0;
SET @alpha = 0.1;
SET @var = 0.01;

Step 2: Calculating values of Hθ and updated values of θ

--Calculating theta0 (the derived table needs an alias, here "t")
SELECT
@theta0_t = @theta0 - (SUM(Outp)/(SELECT COUNT(*) FROM base))*@alpha
FROM
(
SELECT
(@theta1*X + @theta0) - Y as Outp
FROM base
) t;

--Calculating theta1
SELECT
@theta1_t = @theta1 - (SUM(Outp)/(SELECT COUNT(*) FROM base))*@alpha
FROM
(
SELECT
((@theta1*X + @theta0) - Y)*X as Outp
FROM base
) t;

Step 3: Comparing if the updated values of θ are close to original θ or not

--Comparing thetas
IF (@theta0_t BETWEEN @theta0-@var AND @theta0+@var) AND (@theta1_t BETWEEN @theta1-@var AND @theta1+@var)

If the above condition is true, then we stop the process and finalize the values of θ. Otherwise, we keep repeating steps 2 & 3. Thus steps 2 & 3 need to be put inside a loop that runs as long as the updated and current values of θ are different.

--Loop while EITHER theta is still changing (OR, not AND, so that we only
--stop once both have converged)
WHILE (@theta0_t NOT BETWEEN @theta0-@var AND @theta0+@var) OR (@theta1_t NOT BETWEEN @theta1-@var AND @theta1+@var)
BEGIN
--Calculating theta0
SELECT
@theta0_t = @theta0 - (SUM(Outp)/(SELECT COUNT(*) FROM base))*@alpha
FROM
(
SELECT
(@theta1*X + @theta0) - Y as Outp
FROM base
) t;

--Calculating theta1
SELECT
@theta1_t = @theta1 - (SUM(Outp)/(SELECT COUNT(*) FROM base))*@alpha
FROM
(
SELECT
((@theta1*X + @theta0) - Y)*X as Outp
FROM base
) t;

--Comparing thetas
IF (@theta0_t BETWEEN @theta0-@var AND @theta0+@var) AND (@theta1_t BETWEEN @theta1-@var AND @theta1+@var)
BEGIN
SELECT @theta0 = @theta0_t;
SELECT @theta1 = @theta1_t;
BREAK;
END
ELSE
BEGIN
SELECT @theta0 = @theta0_t;
SELECT @theta1 = @theta1_t;
END
END

The above loop will arrive at the optimal values for θ. This is Gradient Descent in all its glory!

Step 4: Fill in the final values of θ in the hypothesis function to calculate predictions for the response variable

SELECT X, Y, @theta0 + @theta1*X AS H_theta
FROM base;

And that’s it! We’ve built a machine learning algorithm in SQL with just a few lines of code!

Practical applications & final thoughts

Despite the onset of technological advancements in the field of Data Science, more often than not, every Data Scientist ends up working with legacy systems. In such cases, if the size of the data is huge, it becomes impractical to fetch it out of a legacy system (like SQL Server) into another environment for data science purposes.

Although I initially began this project as a weekend DIY, I feel it has bigger implications. It can be polished and packaged much better to improve its usability. Things like splitting the data into train & test sets and extending this to multivariate linear regression would make the project much more practical. I would also love to hear your thoughts on what can be improved.

Thank you Ashish.

Keep Learning 🙂
The Data Monk

Affine Analytics Interview Questions | Day 17

Company – Affine Analytics
Location – Bangalore
Position – Senior Business Analyst
Experience – 3+ years

Compensation – Best in the industry

Affine Analytics Interview Questions

Affine Analytics Interview Questions



Number of Rounds – 4

I received a call from the Technical HR who scheduled the telephonic round for the next day

Round 1 – Telephonic Round (Mostly SQL and Project)
I was asked to introduce myself and then the discussion went towards my recent project at Mu Sigma. We had a good discussion on Regression Techniques, a bit on statistics.

The project description was followed by few questions on SQL (the answers to these questions are present in various articles on the website, links are at the end of the interview)

1. What is the order of SQL query execution?
2. You have two tables with one column each. Table A has 5 values, all of them 1 (i.e. 1,1,1,1,1), and Table B has 3 values, all of them 1 (i.e. 1,1,1).

How many rows will be there if we do the following
1. Left Join
2. Right Join
3. Inner Join
Answer:
https://thedatamonk.com/day-4-sql-intermediate-questions/
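Since every value matches every value, each join here degenerates into a full cross match. A quick pure-Python sanity check (my own sketch, not part of the original answer):

```python
# Table A: 5 rows, all 1. Table B: 3 rows, all 1.
a = [1, 1, 1, 1, 1]
b = [1, 1, 1]

# Inner join: one output row per matching pair (5 * 3 here, since all match)
inner = [(x, y) for x in a for y in b if x == y]
# Left join: all matches, plus left rows with no match padded with None
left = inner + [(x, None) for x in a if x not in b]
# Right join: all matches, plus right rows with no match padded with None
right = inner + [(None, y) for y in b if y not in a]

print(len(inner), len(left), len(right))  # 15 15 15
```

All three joins return 15 rows, because there are no unmatched rows on either side.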

3. A quick guesstimate on the number of iPhones sold in India per year

Hint in the below link – https://thedatamonk.com/guesstimate-3-what-are-the-number-of-smartphones-sold-in-india-per-year/

4. What is a RANK() function? How is it different from ROW_NUMBER()?
https://thedatamonk.com/question/affine-analytics-interview-questions-what-is-a-rank-function-how-is-it-different-from-row_number/

5. How to fetch only even rows from a table?

Link to Question 4 and 5 – https://thedatamonk.com/day-11-sql-tricky-questions/
https://thedatamonk.com/question/affine-analytics-interview-questions-how-to-fetch-only-even-rows-from-a-table/

6. What are the measures of Central Tendency
https://thedatamonk.com/day-14-basic-statistics/

The telephonic round went for around 1 hour:-
Introduction – 10 minutes
Project – 30 minutes
Questions – 20 minutes

I was shortlisted for the further rounds.
All together the face-to-face interviews were divided into 3 rounds
Round 1 – SQL and R/Python
Round 2 – Statistics
Round 3 – Case Study and HR questions

Round 1
There were ~20 questions on SQL and some questions on R/Python.
Below are the questions which I remember:-
1. Optimising an SQL query
2. Doing a sum on a column with NULL values
Hint – Use COALESCE
3. How to find the count of duplicate rows?
https://thedatamonk.com/question/affine-analytics-interview-questions-how-to-get-3-min-salaries/
4. Use of Lag function
Link – https://thedatamonk.com/day-5-sql-advance-concepts/
5. Life cycle of a project
6. How to find the second minimum salary?
7. How to get 3 Min salaries?
8. DDL, DML, and DCL commands
https://thedatamonk.com/day-13-sql-theoretical-questions/
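On the COALESCE hint (question 2): SUM() already skips NULLs row by row, but it returns NULL when every value it sees is NULL, which is where COALESCE helps. A small sketch using Python's built-in sqlite3 (table name and values are my own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(10,), (None,), (5,)])

# SUM ignores NULL rows...
total = con.execute("SELECT SUM(v) FROM t").fetchone()[0]
# ...but an all-NULL (or empty) input makes SUM itself return NULL,
# so COALESCE supplies the fallback
guarded = con.execute(
    "SELECT COALESCE(SUM(v), 0) FROM t WHERE v IS NULL"
).fetchone()[0]
print(total, guarded)  # 15 0
```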

There were few more questions on Joins and optimising inner query codes.
Overall difficulty level- 8/10

There were 5 questions on Python/R –

Loading a csv/text file
Writing code of Linear Regression (As it was mentioned on my resume)
Doing a right join in either of the language
Removing null value from a column
https://thedatamonk.com/question/affine-analytics-interview-questions-removing-null-value-from-a-column/

Round 2 – Statistics

How to calculate IQR?
What is positive skewness and negative skewness?
https://thedatamonk.com/question/affine-analytics-interview-questions-what-is-positive-skewness-and-negative-skewness/
What are the two types of regression?
What is multiple linear regression?
What is Logistic Regression?
What is p-value and give an example?
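For the IQR question, here is a quick sketch with Python's standard library (the data values are my own; method='inclusive' picks the common linear-interpolation convention):

```python
import statistics

# IQR = Q3 - Q1: the spread of the middle 50% of the data
data = [1, 2, 3, 4, 5, 6, 7, 8]
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1
print(q1, q2, q3, iqr)
```

For this data the quartiles come out as 2.75, 4.5 and 6.25, giving an IQR of 3.5.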

These questions were discussed in detail, and I powered the explanations with real-life examples.

https://thedatamonk.com/day-18-statistics-interview-questions/

Bonus tips – Do look for good examples

Round 3 – Case Study and HR Questions

How many laptops are sold in Bangalore in a day?
https://thedatamonk.com/guesstimate-2-how-many-laptops-are-sold-in-bangalore-in-a-day/

Business Case Study – There is a mobile company which is very popular in other Asian countries. The company is planning to open its branch in the most popular mall of Bangalore.
What should be the strategy of the company?
How can you use freely available data to plan the marketing of the campaigns?
How can you use Digital marketing to create campaigns for the company?

https://thedatamonk.com/question/affine-analytics-interview-questions-business-case-study/

These questions were followed by:-
Why do you want to change the company?
How is the work in your current organisation?

I got the confirmation in 2 working days.

This was it!

Amazon Interview Questions
Sapient Interview Questions


Full interview question of these round is present in our book What do they ask in Top Data Science Interview Part 2: Amazon, Accenture, Sapient, Deloitte, and BookMyShow 

You can get your hands on our ebooks; we also have a 10 e-book bundle offer at Rs.549 where you get a total of 1400 questions.
Comment below or mail at contact@thedatamonk.com for more information

1. The Monk who knew Linear Regression (Python): Understand, Learn and Crack Data Science Interview
2. 100 Python Questions to crack Data Science/Analyst Interview
3. Complete Linear Regression and ARIMA Forecasting project using R
4. 100 Hadoop Questions to crack data science interview: Hadoop Cheat Sheet
5. 100 Questions to Crack Data Science Interview
6. 100 Puzzles and Case Studies To Crack Data Science Interview
7. 100 Questions To Crack Big Data Interview
8. 100 Questions to Learn R in 6 Hours
9. Complete Analytical Project before Data Science interview
10. 112 Questions To Crack Business Analyst Interview Using SQL
11. 100 Questions To Crack Business Analyst Interview
12. A to Z of Machine Learning in 6 hours
13. In 2 Hours Create your first Azure ML in 23 Steps
14. How to Start A Career in Business Analysis
15. Web Analytics – The Way we do it
16. Write better SQL queries + SQL Interview Questions
17. How To Start a Career in Data Science
18. Top Interview Questions And All About Adobe Analytics
19. Business Analyst and MBA Aspirant’s Complete Guide to Case Study – Case Study Cheatsheet
20. 125 Must have Python questions before Data Science interview
21. 100 Questions To Understand Natural Language Processing in Python
22. 100 Questions to master forecasting in R: Learn Linear Regression, ARIMA, and ARIMAX
23. What do they ask in Top Data Science Interviews
24. What do they ask in Top Data Science Interviews: Part 1

Keep Learning 



10 Questions, 10 Minutes – 5/100

1. What if you want to toggle case for a Python string?

We have the swapcase() method from the str class to do just that.

>>> 'AyuShi'.swapcase()
'aYUsHI'

2. Write code to print only upto the letter t.

>>> s = 'I love Python'
>>> i = 0
>>> while s[i] != 't':
...     print(s[i], end='')
...     i += 1
I love Py

3. What is recursion?

When a function makes a call to itself, it is termed recursion. But then, in order for it to avoid forming an infinite loop, we must have a base condition.

Let’s take an example.

>>> def facto(n):
...     if n == 1: return 1
...     return n * facto(n-1)
>>> facto(4)
24

4. What is a function?

When we want to execute a sequence of statements, we can give it a name. Let’s define a function to take two numbers and return the greater number.

>>> def greater(a, b):
...     return a if a > b else b

5. Explain Python List Comprehension.

The list comprehension in python is a way to declare a list in one line of code. Let’s take a look at one such example.

>>> [i for i in range(1,11,2)]
[1, 3, 5, 7, 9]
>>> [i*2 for i in range(1,11,2)]
[2, 6, 10, 14, 18]

6. How do you get all values from a Python dictionary?

We saw previously, to get all keys from a dictionary, we make a call to the keys() method. Similarly, for values, we use the method values().

 >>> 'd' in {'a':1,'b':2,'c':3,'d':4}.values()  
False
 >>> 4 in {'a':1,'b':2,'c':3,'d':4}.values()  
True

7. What is the difference between remove() function and del statement?

You can use the remove() function to delete a specific object in the list.

If you want to delete an object at a specific location (index) in the list, you can either use del or pop.

Note: You don’t need to import any extra module to use these functions for removing an element from the list.

We cannot use remove() or del with a tuple, because tuples are immutable and do not support item deletion.
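A minimal sketch of the three options side by side (the sample list is my own):

```python
nums = [10, 20, 30, 20]

nums.remove(20)    # remove() deletes the first matching *value*
print(nums)        # [10, 30, 20]

del nums[0]        # del deletes by *index* (or a slice)
print(nums)        # [30, 20]

last = nums.pop()  # pop() deletes by index (last by default) and returns it
print(last, nums)  # 20 [30]
```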


8. How to remove leading whitespaces from a string in Python?

To remove leading characters from a string, we can use the lstrip() function. It is a Python string method which takes an optional string parameter. If the parameter is provided, it strips any leading characters that appear in that set. Otherwise, it removes all the leading whitespace from the string.

string = "  javatpoint "   
string2 = "    javatpoint        "  
print(string)  
print(string2)  
print("After stripping all leading whitespaces:")  
print(string.lstrip())  
print(string2.lstrip())  


9. Why do we use join() function in Python?

A. The join() is defined as a string method which returns a string value. It is concatenated with the elements of an iterable. It provides a flexible way to concatenate the strings. See an example below.

s = "Rohan"
s2 = "ab"
# Calling function (note: avoid naming a variable str, it shadows the built-in)
s2 = s.join(s2)
# Displaying result
print(s2)
Output:
aRohanb


10. What are the rules for a local and global variable in Python?

A. In Python, a variable that is only referenced inside a function is implicitly global. If a variable is assigned a value anywhere within the function’s body, it is assumed to be local, and to assign to a global variable from inside a function we need to declare it with the global keyword. Local variables are accessible only within their function’s body, while global variables are accessible anywhere in the program, and any function can read them and, if declared global, modify their value.
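A short sketch of these rules (variable names are my own):

```python
count = 0          # a global variable

def bump():
    global count   # without this, 'count += 1' below would raise
    count += 1     # UnboundLocalError: assignment makes a name local

def peek():
    return count   # read-only reference: implicitly global, no keyword needed

bump()
bump()
print(peek())  # 2
```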

10 Questions, 10 Minutes – 4/100

1. How would you convert a string into an int in Python?

If a string contains only numerical characters, you can convert it into an integer using the int() function.

>>> int('227')
227

Let's check the types:

>>> type('227')
<class 'str'>
>>> type(int('227'))
<class 'int'>

2. What is the difference between UNIQUE and DISTINCT? (90% asked Advanced SQL Interview Questions)

UNIQUE is a constraint applied at insertion time: a column with a UNIQUE constraint will not accept duplicate values. DISTINCT, on the other hand, is used at retrieval time: it suppresses duplicates in the result set, so if two rows are identical only one of them is returned. In Oracle's SELECT syntax, you can specify either DISTINCT or UNIQUE to return only one copy of each set of duplicate rows; in that context the two keywords are synonymous. Duplicate rows are those with matching values for each expression in the select list.


3. What will be the output of the following query?

Query:
select case when null=null then 'Amit' else 'Pradnya' end from Table_Name;

In SQL, NULL is not equal to anything, not even itself: null=null evaluates to unknown rather than true. So the CASE falls through to the ELSE branch and the output of the above query is 'Pradnya'.
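The behaviour is easy to reproduce from Python with the built-in sqlite3 module (a sketch; any engine gives the same result, since NULL = NULL evaluates to unknown rather than true):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# No table is needed to demonstrate the comparison itself
result = con.execute(
    "SELECT CASE WHEN NULL = NULL THEN 'Amit' ELSE 'Pradnya' END"
).fetchone()[0]
print(result)  # Pradnya
```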

4. Which are the different Set operators in SQL? (100% asked Advanced SQL Interview Questions)

Set operators combine the results of two queries into a single result set. The two queries must return the same number of columns, and the corresponding columns must have compatible datatypes.

Following are the Set Operators in SQL:

1. UNION

2. UNION ALL
3. INTERSECT
4. MINUS

5. How to select the first 5 characters of First name in the Employee table?

Oracle:
Select Substr(First_name,1,5) from Employee;

MS SQL Server:
Select Substring(First_name,1,5) from Employee;

MySQL:
Select Substr(First_name,1,5) from Employee;

6. What will be the output of the following query? Query: Select * from (select 'a' union all select 'b') Q;

In Oracle this throws an error, because a SELECT without a FROM clause (e.g. select 'a') is invalid there. In SQL Server or MySQL the subquery is legal and the query returns two rows, 'a' and 'b'.

7. Explain co-related sub-query with an example.

Fetch the Employees who have not been assigned a single department.

Select * from Employee E where Not Exists
(Select Department_no From Department D where E.Employee_id=D.Employee_ID);

Execution of the query:

Step 1: The outer query reads a record of Employee:

Select * from Employee E;

Step 2: For that employee row, the inner query is executed, and its result decides whether the row passes the NOT EXISTS filter:

(Select Department_no From Department D where E.Employee_id=D.Employee_ID);

Step 3:
Step 2 is repeated for every row of Employee until all qualifying rows have been fetched.

8. What is the difference between NVL, NVL2 and NULLIF?

1. NVL:

NVL(expr1, expr2) substitutes expr2 when expr1 is NULL.

2. NVL2:

NVL2(expr1, expr2, expr3) returns expr2 when expr1 is not NULL and expr3 when expr1 is NULL, so it substitutes a value in both cases.

3. NULLIF:

NULLIF(expr1, expr2) compares expr1 and expr2. If they are equal, the NULLIF function returns NULL. Otherwise, it returns expr1.

9. What is an Index? What is the use of an index in SQL?

An index is an optional structure associated with a table which may improve the performance of queries. In simple words: suppose we want to find a topic in a book; we go to the index page of that book and look the topic up instead of scanning every page. In the same way, when an index exists on a table, values can be located without a full table scan.

Indexes are used to improve the performance of queries.

10. What is the difference between Having and Where clause?

Where clause is used to fetch data from a database that specifies particular criteria whereas a Having clause is used along with ‘GROUP BY’ to fetch data that meets particular criteria specified by the Aggregate functions. Where clause cannot be used with Aggregate functions, but the Having clause can.
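A small sketch with Python's sqlite3 showing the two clauses at work (the table and values are my own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (dept TEXT, salary INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("IT", 40000), ("IT", 60000), ("HR", 30000), ("HR", 35000)])

rows = con.execute(
    """
    SELECT dept, SUM(salary) AS total
    FROM emp
    WHERE salary > 30000          -- row-level filter, applied before grouping
    GROUP BY dept
    HAVING SUM(salary) > 50000    -- group-level filter on the aggregate
    """
).fetchall()
print(rows)  # [('IT', 100000)]
```

WHERE drops HR's 30000 row before grouping; HAVING then keeps only the IT group, whose aggregate exceeds 50000.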

10 Questions, 10 Minutes – SQL/R/Python – 3/100

1.What will the following code output?

>>> word = 'abcdefghij'
>>> word[:3] + word[3:]

The output is ‘abcdefghij’. The first slice gives us ‘abc’, the next gives us ‘defghij’.

2.How will you convert a list into a string?

We will use the join() method for this.


>>> nums=['one','two','three','four','five','six','seven']
>>> s=' '.join(nums)
>>> s

‘one two three four five six seven’

3. How will you remove a duplicate element from a list?

We can turn it into a set to do that.

>>> nums = [1, 2, 1, 3, 4, 2]
>>> set(nums)
{1, 2, 3, 4}
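Note that a set does not promise to keep the original order. If order matters, dict.fromkeys() (Python 3.7+) is a handy alternative (a sketch):

```python
nums = [1, 2, 1, 3, 4, 2]

print(set(nums))                  # {1, 2, 3, 4} -- but order is not guaranteed
print(list(dict.fromkeys(nums)))  # [1, 2, 3, 4] -- keeps first-seen order
```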

4. Explain the //, %, and ** operators in Python.

The // operator performs floor division. It will return the integer part of the result on division.

>>> 7//2 
3

Normal division would return 3.5 here.

Similarly, ** performs exponentiation. a**b returns the value of a raised to the power b.

>>> 2**10 
1024

Finally, % is for modulus. It gives us the remainder left over after division.

>>> 13%7 
6

5. Explain identity operators in Python.

The operators ‘is’ and ‘is not’ tell us if two objects have the same identity.

>>> 10 is '10' 
False
 >>> True is not False 
True 

6. What are numbers?

Python provides us with five kinds of data types:

Numbers – Numbers are used to hold numerical values.

>>> a=7.0 

7. What are Strings?

A string is a sequence of characters. We declare it using single or double quotes.

>>> title="Ayushi's Book" 


8. What are Lists?

Lists – A list is an ordered collection of values, and we declare it using square brackets.

>>> colors=['red','green','blue'] 
>>> type(colors)

<class ‘list’>



9. What are Tuples?

Tuples – A tuple, like a list, is an ordered collection of values. The difference. However, is that a tuple is immutable. This means that we cannot change a value in it.

>>> name = ('Ayushi', 'Sharma')
>>> name[0] = 'Avery'
Traceback (most recent call last):
  File "<pyshell#129>", line 1, in <module>
    name[0] = 'Avery'
TypeError: 'tuple' object does not support item assignment


10. What are Dictionaries?

Dictionary – A dictionary is a data structure that holds key-value pairs. We declare it using curly braces.

>>> squares = {1:1, 2:4, 3:9, 4:16, 5:25}
>>> type(squares)
<class 'dict'>
>>> type({})
<class 'dict'>

We can also use a dictionary comprehension:

>>> squares = {x: x**2 for x in range(1,6)}
>>> squares
{1: 1, 2: 4, 3: 9, 4: 16, 5: 25}


10 Questions, 10 Minutes – 2/100

This is something that has been on my mind for a long time. We will be picking 10 questions per day and keeping them simple.
We will make sure that the complete article can be covered in 10 minutes by the reader. There will be 100 posts in the coming 3 months.

The articles/questions will revolve around SQL, Statistics, Python/R, MS Excel, Statistical Modelling, and case studies.

The questions will be a mix of these topics to help you prepare for interviews

You can also contribute by framing 10 questions and sending it to contact@thedatamonk.com or messaging me on Linkedin.

The questions will be updated late in the night ~1-2 a.m. and will be posted on Linkedin as well.

Let’s see how many we can solve in the next 100 posts

1/100 – SQL Questions

1. How to find the minimum salary using subquery?
-SELECT *
FROM employee
WHERE salary = (select MIN(salary) from employee);

2. How to find the second minimum salary?
– SELECT *
FROM employee
WHERE salary = (SELECT MIN(salary) FROM employee
                WHERE salary > (SELECT MIN(salary) FROM employee));

Similarly, find the third minimum salary

– SELECT *
FROM employee
WHERE salary = (SELECT MIN(salary) FROM employee
                WHERE salary > (SELECT MIN(salary) FROM employee
                                WHERE salary > (SELECT MIN(salary) FROM employee)));

3. The above query is too lengthy; write a query to get the third minimum salary with some other method.

– SELECT DISTINCT salary
FROM emp e1 WHERE 3 = (SELECT COUNT(DISTINCT salary) FROM emp e2 WHERE e1.salary >= e2.salary);

4. How to get 3 Min salaries?
-SELECT DISTINCT salary FROM emp a WHERE 3 >= (SELECT COUNT(DISTINCT salary) FROM emp b WHERE a.salary >= b.salary);

5. Some basic SQL SELECT questions (outputs as in SQL Server)
– SELECT 125
125
– SELECT 'Ankit'+'1'
Ankit1
– SELECT 'Ankit'+1
Error
– SELECT '2'+2
4
– SELECT SUM('1')
1

6. Write a generic method to fetch the nth highest salary without TOP or Limit

SELECT Salary
FROM Worker W1
WHERE n-1 = (
 SELECT COUNT( DISTINCT ( W2.Salary ) )
 FROM Worker W2
 WHERE W2.Salary > W1.Salary
 );

(Note the strict >: exactly n-1 distinct salaries are greater than the nth highest. With >= the count would include W1's own salary and the query would be off by one.)
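The query can be exercised from Python with the built-in sqlite3 module (a sketch; the Worker salaries are my own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Worker (Salary INTEGER)")
con.executemany("INSERT INTO Worker VALUES (?)",
                [(100,), (200,), (200,), (300,), (400,)])

def nth_highest(n):
    # "n-1 distinct salaries are strictly greater than mine" => nth highest
    row = con.execute(
        """
        SELECT DISTINCT Salary
        FROM Worker W1
        WHERE ? = (SELECT COUNT(DISTINCT W2.Salary)
                   FROM Worker W2
                   WHERE W2.Salary > W1.Salary)
        """,
        (n - 1,),
    ).fetchone()
    return row[0] if row else None

print(nth_highest(1), nth_highest(2), nth_highest(3))  # 400 300 200
```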

7. LAG(): Provides access to a row at a given physical offset that comes before the current row. Use this function in a SELECT statement to compare values in the current row with values in a previous row as
specified by offset. Default offset is 1 if not specified. If Partition By clause is specified then it returns the offset Value in each partition after ordering the partition by Order By Clause.

Basically, lag() is used to create one more column in the table where you can get the previous value of the specified column

Col1  Col2  Lag_Col
a     10    NULL
b     20    10
c     30    20
d     40    30

8. One more example

employee_number  last_name   first_name  salary  dept_id
12009            Sutherland  Barbara     54000   45
34974            Yates       Fred        80000   45
34987            Erickson    Neil        42000   45
45001            Parker      Sally       57500   30
75623            Gates       Steve       65000   30

SELECT dept_id, last_name, salary,
LAG (salary,1) OVER (ORDER BY salary) AS lower_salary
FROM employees;

dept_id  last_name   salary  lower_salary
45       Erickson    42000   NULL
45       Sutherland  54000   42000
30       Parker      57500   54000
30       Gates       65000   57500
45       Yates       80000   65000

9. LEAD() – Provides access to a row at a given physical offset that comes after the current row. Use this function in a SELECT statement to compare values in the current row with values in a subsequent row
as specified by offset. Default offset is 1 if not specified. If Partition By clause is specified then it returns the offset Value in each partition after ordering the partition by Order By Clause
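Both functions can be tried out with Python's sqlite3 (SQLite 3.25+ supports window functions; the table mirrors the employees example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (dept_id INT, last_name TEXT, salary INT)")
con.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(45, "Sutherland", 54000), (45, "Yates", 80000), (45, "Erickson", 42000),
     (30, "Parker", 57500), (30, "Gates", 65000)],
)
rows = con.execute(
    """
    SELECT last_name, salary,
           LAG(salary, 1)  OVER (ORDER BY salary) AS lower_salary,
           LEAD(salary, 1) OVER (ORDER BY salary) AS higher_salary
    FROM employees
    ORDER BY salary
    """
).fetchall()
for row in rows:
    print(row)  # first row: ('Erickson', 42000, None, 54000)
```

The first row has no previous salary (LAG is NULL) and the last row has no next salary (LEAD is NULL).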

10. Which operator is used for Pattern Matching?

LIKE operator is used for pattern matching. It supports the below wildcards.
 % : Matches any string of zero or more characters.
 _ : Matches any single character.
 [] : Matches any single character within the specified range ([a-f]) or set ([abcdef]). (SQL Server)
 [^] : Matches any single character not within the specified range ([^a-f]) or set ([^abcdef]). (SQL Server)


This was the second set of 10 questions, if you want to learn more about the type of questions asked in different Data Science interviews then do try the below book:-

 What do they ask in top Data Science Interviews: 5 Complete Data Science Real Interviews Q and A

 What do they ask in Top Data Science Interview Part 2: Amazon, Accenture, Sapient, Deloitte, and BookMyShow

Keep Learning 🙂

The Data Monk

10 Questions, 10 Minutes – 1/100


1. Write the syntax to create a new column using Row Number over the Salary column

SELECT *, ROW_NUMBER() OVER (Order By Salary) as Row_Num
FROM Employee

Output

Emp. ID  Name     Salary  Row_Num
232      Rakshit  30000   1
543      Rahul    30000   2
124      Aman     40000   3
123      Amit     50000   4
453      Sumit    50000   5

2. What is PARTITION BY clause?
PARTITION BY clause is used to create a partition of ranking in a table. If you partition by Salary in the above table, then it will provide a ranking based on each unique salary. Example below:-

SELECT *, ROW_NUMBER() OVER (PARTITION BY Salary ORDER BY Salary) as Row_Num

Emp. ID  Name     Salary  Row_Num
232      Rakshit  30000   1
543      Rahul    30000   2
124      Aman     40000   1
123      Amit     50000   1
453      Sumit    50000   2

3. What is a RANK() function? How is it different from ROW_NUMBER()?
– RANK() function gives ranking to a row based on the value on which you want to base your ranking. If there are equal values, then the rank will be repeated and the row following the repeated values will skip as many ranks as there are repeated values row. Confused?? Try out the example below:-

SELECT *, RANK() OVER (ORDER BY Salary) as Row_Num
FROM Employee

Output

Emp. ID  Name     Salary  Row_Num
232      Rakshit  30000   1
543      Rahul    30000   1
124      Aman     40000   3
123      Amit     50000   4
453      Sumit    50000   4

As you can see, the rank 2 has been skipped because there were two employees with the same Salary and the result is ordered in ascending order by default.

4. What is Dense Ranking?
– DENSE_RANK() is similar to the RANK() function but it does not skip any rank, so if there are two equal values then both will be ranked 1 and the next distinct value will be ranked 2, not 3.

Syntax:-
SELECT *, DENSE_RANK() OVER (ORDER BY Salary) as Row_Num
FROM Employee

Output:-

Emp. ID  Name     Salary  Row_Num
232      Rakshit  30000   1
543      Rahul    30000   1
124      Aman     40000   2
123      Amit     50000   3
453      Sumit    50000   3
432      Nihar    60000   4

5. What is NTILE() function?
– NTILE(n) distributes the ordered rows into n roughly equal buckets, so NTILE(3) will divide the data into 3 parts.

SELECT *, NTILE(3) OVER (ORDER BY Salary) as Ntile
FROM Employee

The number of rows per bucket should be 6/3 = 2, so each Ntile value covers 2 rows:

Emp. ID  Name     Salary  Ntile
232      Rakshit  30000   1
543      Rahul    30000   1
124      Aman     40000   2
123      Amit     50000   2
453      Sumit    50000   3
432      Nihar    60000   3
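All four ranking functions can be compared side by side using Python's sqlite3 (a sketch over the same sample salaries as the tables above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (Name TEXT, Salary INTEGER)")
con.executemany("INSERT INTO Employee VALUES (?, ?)",
                [("Rakshit", 30000), ("Rahul", 30000), ("Aman", 40000),
                 ("Amit", 50000), ("Sumit", 50000), ("Nihar", 60000)])

rows = con.execute(
    """
    SELECT Name, Salary,
           ROW_NUMBER() OVER (ORDER BY Salary) AS row_num,
           RANK()       OVER (ORDER BY Salary) AS rnk,
           DENSE_RANK() OVER (ORDER BY Salary) AS dense_rnk,
           NTILE(3)     OVER (ORDER BY Salary) AS bucket
    FROM Employee
    ORDER BY Salary
    """
).fetchall()
for row in rows:
    print(row)
```

ROW_NUMBER yields 1-6 with no repeats, RANK repeats and skips (1,1,3,4,4,6), DENSE_RANK repeats without skipping (1,1,2,3,3,4), and NTILE(3) deals the rows into buckets 1,1,2,2,3,3.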

6. How to get the second highest salary from a table?
Select MAX(Salary)
from Employee
Where Salary NOT IN (SELECT MAX(Salary) from Employee)

7. Find the 3rd Maximum salary in the employee table
– Select distinct sal
from emp e1
where 3 = (select count(distinct sal) from emp e2 where e1.sal <= e2.sal);

8. Get all employee detail from EmployeeDetail table whose “FirstName” not start with any single character between ‘a-p’
– SELECT *
FROM EmployeeDetail
WHERE FirstName like '[^a-p]%'

9. How to fetch only even rows from a table?
-The best way to do it is by adding a row number using ROW_NUMBER() and then pulling the alternate row number using row_num%2 = 0

Suppose, there are 3 columns in a table i.e. student_ID, student_Name, student_Grade. Pull the even rows

SELECT *
FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY student_ID) as row_num FROM student) x
WHERE x.row_num%2=0

10. How to fetch only odd rows from the same table?
-Simply apply the x.row_num%2 <> 0 to get the odd rows

SELECT *
FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY student_ID) as row_num FROM student) x
WHERE x.row_num%2 <> 0
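A runnable sketch of the even-rows query using Python's sqlite3 (the student rows are my own):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (student_ID INTEGER, student_Name TEXT)")
con.executemany("INSERT INTO student VALUES (?, ?)",
                [(1, "A"), (2, "B"), (3, "C"), (4, "D"), (5, "E")])

even = con.execute(
    """
    SELECT student_ID, student_Name
    FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY student_ID) AS row_num
          FROM student) x
    WHERE x.row_num % 2 = 0
    """
).fetchall()
print(even)  # [(2, 'B'), (4, 'D')]
```

Swapping the filter to x.row_num % 2 <> 0 returns the odd rows instead.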


Let us know if you think I need to change any answer here.

Keep Learning 🙂

The Data Monk

Top 100 Power BI Interview Questions – Part 1/2

Q1. What are the parts of Microsoft self-service business intelligence solution?

Microsoft has two parts for Self-Service BI

  • Excel BI Toolkit: allows users to create interactive reports by importing data from different sources and to model the data according to the report requirement.
  • Power BI: the online solution that enables you to share the interactive reports and queries that you have created using the Excel BI Toolkit.

Q2. What is self-service business intelligence?

Self-Service Business Intelligence (SSBI)

  • SSBI is an approach to data analytics that enables business users to filter, segment, and analyze their data without in-depth technical knowledge of statistical analysis or business intelligence (BI).
  • SSBI has made it easier for end users to access their data and create various visuals to get better business insights.
  • Anybody who has a basic understanding of the data can create reports to build intuitive and shareable dashboards.

Q3.  What is Power BI?

Power BI is a cloud-based data sharing environment. Once you have developed reports using Power Query, Power Pivot and Power View, you can share your insights with your colleagues. This is where Power BI enters the equation. Power BI, which technically is an aspect of SharePoint Online, lets you load Excel workbooks into the cloud and share them with a chosen group of co-workers. Not only that, but your colleagues can interact with your reports, applying filters and slicers to highlight data. All of this is tied together by Power BI: a simple way of sharing your analysis and insights from the Microsoft cloud.

Power BI features allow you to:

  • Share presentations and queries with your colleagues.
  • Update your Excel file from data sources that can be on-site or in the cloud.
  • Display the output on multiple devices. This includes PCs, tablets, and HTML 5-enabled mobile devices that use the Power BI app.
  • Query your data using natural language processing (or Q&A, as it is known).

Q4. What is Power BI Desktop?

Power BI Desktop is a free desktop application that can be installed right on your own computer. It works cohesively with the Power BI service by providing advanced data exploration, shaping, modeling, and report creation with highly interactive visualizations. You can save your work to a file, or publish your data and reports right to your Power BI site to share with others.

Q5. What data sources can Power BI connect to?

The list of data sources for Power BI is extensive, but it can be grouped into the following:

  • Files: Data can be imported from Excel (.xlsx, .xlsm), Power BI Desktop files (.pbix) and Comma Separated Values (.csv).
  • Content Packs: A collection of related documents or files that are stored as a group. In Power BI, there are two types of content packs: firstly, those from service providers like Google Analytics, Marketo or Salesforce, and secondly, those created and shared by other users in your organization.
  • Connectors to databases and other datasets, such as Azure SQL Database, SQL Server Analysis Services tabular data, etc.

Q6. What are Building Blocks in Power BI?

The following are the Building Blocks (or) key components of Power BI:

  1. Visualizations: Visualization is a visual representation of data.
    Example: Pie Chart, Line Graph, Side by Side Bar Charts, Graphical Presentation of the source data on top of Geographical Map, Tree Map, etc.
  2. Datasets: Dataset is a collection of data that Power BI uses to create its visualizations.
    Example: Excel sheets, Oracle or SQL server tables.
  3. Reports: Report is a collection of visualizations that appear together on one or more pages.
    Example: Sales by Country, State, City Report, Logistic Performance report, Profit by Products report etc.
  4. Dashboards: Dashboard is single layer presentation of multiple visualizations, i.e we can integrate one or more visualizations into one page layer.
    Example: Sales dashboard can have pie charts, geographical maps and bar charts.
  5. Tiles: Tile is a single visualization in a report or on a dashboard.
    Example: Pie Chart in Dashboard or Report.

Q7. What are the different types of filters in Power BI Reports?

Power BI provides a variety of options to filter reports, data, and visualizations. The following are the filter types:

  • Visual-level Filters: These filters work on only an individual visualization, reducing the amount of data that the visualization can see. Moreover, visual-level filters can filter both data and calculations.
  • Page-level Filters: These filters work at the report-page level. Different pages in the same report can have different page-level filters.
  • Report-level Filters: These filters work on the entire report, filtering all pages and visualizations included in the report.

We know that Power BI visuals have an interactions feature, which makes filtering a report a breeze. Visual interactions are useful, but they come with some limitations:

  • The filter is not saved as part of the report. Whenever you open a report, you can begin to play with visual filters but there is no way to store the filter in the saved report.
  • The filter is always visible. Sometimes you want a filter for the entire report, but you do not want any visual indication of the filter being applied.


Q8. What are content packs in Power BI?

Content packs for services are pre-built solutions for popular services, offered as part of the Power BI experience. A subscriber to a supported service can quickly connect to their account from Power BI and see their data through live dashboards and interactive reports that have been pre-built for them. Microsoft has released content packs for popular services such as Salesforce.com, Marketo, Adobe Analytics, Azure Mobile Engagement, CircuitID, comScore Digital Analytix, Quickbooks Online, SQL Sentry and tyGraph. Organizational content packs give users, BI professionals, and system integrators the tools to build their own content packs to share purpose-built dashboards, reports, and datasets within their organization.

 


Q9. What is DAX?

To do basic calculations and data analysis on data in Power Pivot, we use Data Analysis Expressions (DAX). It is a formula language used to create calculated columns and calculated fields (measures).

  • DAX works on column values.
  • DAX cannot modify or insert data.
  • We can create calculated columns and measures with DAX, but we cannot compute rows using DAX.

Sample DAX formula syntax (for the measure named Total Sales, calculate (=) the SUM of values in the [SalesAmount] column in the Sales table):

Total Sales = SUM(Sales[SalesAmount])

A – Measure name (Total Sales)

B – = indicates the beginning of the formula

C – DAX function (SUM)

D – Parentheses for the SUM function

E – Referenced table (Sales)

F – Referenced column name ([SalesAmount])

Q10. What are some of the DAX functions?

Below are some of the most commonly used DAX functions:

  • SUM, MIN, MAX, AVG, COUNTROWS, DISTINCTCOUNT
  • IF, AND, OR, SWITCH
  • ISBLANK, ISFILTERED, ISCROSSFILTERED
  • VALUES, ALL, FILTER, CALCULATE,
  • UNION, INTERSECT, EXCEPT, NATURALINNERJOIN, NATURALLEFTOUTERJOIN,
    SUMMARIZECOLUMNS, ISEMPTY,
  • VAR (Variables)
  • GEOMEAN, MEDIAN, DATEDIFF

Q11. How is the FILTER function used?

The FILTER function returns a table with a filter condition applied to each of its source table rows. FILTER is rarely used in isolation; it is generally used as a parameter to other functions such as CALCULATE.

  • FILTER is an iterator and thus can negatively impact performance over large source tables.
  • Complex filtering logic can be applied such as referencing a measure in a filter expression.
    • FILTER(MyTable,[SalesMetric] > 500)


Q12. What are the functions and limitations of DAX?

The filter functions CALCULATE and CALCULATETABLE are the only DAX functions that allow you to modify the filter context of measures or tables. They can:

  • Add to existing filter context of queries.
  • Override filter context from queries.
  • Remove existing filter context from queries.

Limitations:

  • Filter parameters can only operate on a single column at a time.
  • Filter parameters cannot reference a metric.

Q13. What are SUMMARIZE() and SUMMARIZECOLUMNS() in DAX?

SUMMARIZE()

  • The main group-by function in SSAS.
  • Recommended practice is to specify the table and group-by columns but not metrics; you can use the ADDCOLUMNS function for those.

SUMMARIZECOLUMNS()

  • The newer group-by function for SSAS and Power BI Desktop; more efficient.
  • Specify group-by columns, table, and expressions.


Q14. What are some benefits of using Variables in DAX?

Below are some of the benefits:

  • By declaring and evaluating a variable, the variable can be reused multiple times in a DAX expression, thus avoiding additional queries of the source database.
  • Variables can make DAX expressions more intuitive/logical to interpret.
  • Note that variables are scoped to their measure or query; they cannot be shared among measures or queries, or be defined at the model level.

 

Q15. How would you create trailing X month metrics via DAX against a non-standard calendar?

The solution will involve:

  1. CALCULATE function to control (take over) filter context of measures.
  2. ALL to remove existing filters on the date dimension.
  3. FILTER to identify which rows of the date dimension to use.

Alternatively, CONTAINS may be used:

  • CALCULATE(FILTER(ALL('DATE'), …))


Q16. What are the different BI add-in to Excel ?

Below are the most important BI add-in to Excel:

  • Power Query: Helps in finding, editing and loading external data.
  • Power Pivot: Mainly used for data modeling and analysis.
  • Power View: Used to design visual, interactive reports.
  • Power Map: Helps to display insights on a 3D map.

Q17. What is Power Pivot?

Power Pivot is an add-in for Microsoft Excel 2010 that enables you to import millions of rows of data from multiple data sources into a single Excel workbook. It lets you create relationships between heterogeneous data, create calculated columns and measures using formulas, build PivotTables and PivotCharts. You can then further analyze the data so that you can make timely business decisions without requiring IT assistance.

Q18. What is Power Pivot Data Model?

It is a model that is made up of data types, tables, columns, and table relations. These data tables are typically constructed for holding data for a business entity.


Q19. What is xVelocity in-memory analytics engine used in Power Pivot?

The main engine behind Power Pivot is the xVelocity in-memory analytics engine. It can handle large amounts of data because it stores data in columnar databases and uses in-memory analytics, which results in faster processing of data as all data is loaded into RAM.

Q20. What are some of differences in data modeling between Power BI Desktop and Power Pivot for Excel?

Here are some of the differences:

  • Power BI Desktop supports bi-directional cross filtering relationships, security, calculated tables, and Direct Query options.
  • Power Pivot for Excel has single direction (one to many) relationships, calculated columns only, and supports import mode only. Security roles cannot be defined in Power Pivot for Excel.

Q21. Can we have more than one active relationship between two tables in data model of power pivot?

No, we cannot have more than one active relationship between two tables. However, we can have more than one relationship between two tables: there will be only one active relationship and possibly many inactive relationships. In the model diagram, dotted lines represent inactive relationships and continuous lines represent active ones.

Q22. What is Power Query?

Power Query is an ETL tool used to shape, clean and transform data through an intuitive interface, without having to write code. It helps the user to:

  • Import Data from wide range of sources from files, databases, big data, social media data, etc.
  • Join and append data from multiple data sources.
  • Shape data as per requirement by removing and adding data.

 

 

Q23. What are the data destinations for Power Queries?

There are two destinations for output we get from power query:

  • Load to a table in a worksheet.
  • Load to the Excel Data Model.

 

 

Q24. What is query folding in Power Query?

Query folding is when steps defined in Power Query/Query Editor are translated into SQL and executed by the source database rather than the client machine. It’s important for processing performance and scalability, given limited resources on the client machine.

 

 


Q25. What are some common Power Query/Editor Transforms?

Changing Data Types, Filtering Rows, Choosing/Removing Columns, Grouping, Splitting a column into multiple columns, Adding new Columns ,etc.

Q26. Can SQL and Power Query/Query Editor be used together?

Yes, a SQL statement can be defined as the source of a Power Query/M function for additional processing/logic. This would be a good practice to ensure that an efficient database query is passed to the source and avoid unnecessary processing and complexity by the client machine and M function.

Q28. What are query parameters and Power BI templates?

Query parameters can be used to provide users of a local Power BI Desktop report with a prompt to specify the values they're interested in.

  • The parameter selection can then be used by the query and calculations.
  • PBIX files can be exported as Templates (PBIT files).
  • Templates contain everything in the PBIX except the data itself.

Parameters and templates can make it possible to share/email smaller template files and limit the amount of data loaded into the local PBIX files, improving processing time and experience.

Q29. Which language is used in Power Query?

Power Query uses a programming language called M code. It is easy to use and similar to other languages. Note that M code is case-sensitive.

Q30. Why do we need Power Query when Power Pivot can import data from mostly used sources?

Power Query is a self-service ETL (Extract, Transform, Load) tool which runs as an Excel add-in. It allows users to pull data from various sources, manipulate that data into a form that suits their needs, and load it into Excel. It is often preferable to use Power Query alongside Power Pivot because it lets you not only load the data but also transform it as per the user's needs while loading.

 

Q31. What is Power Map?

Power Map is an Excel add-in that provides you with a powerful set of tools to help you visualize and gain insight into large sets of data that have a geo-coded component. It can help you produce 3D visualizations by plotting up to a million data points in the form of column, heat, and bubble maps on top of a Bing map. If the data is time-stamped, it can also produce interactive views that display how the data changes over space and time.

Q32. What are the primary requirements for a table to be used in Power Map?

The primary requirement is that the table contains unique rows. It must also contain location data, which can be in the form of:

  • A Latitude/Longitude pair, or
  • Address fields such as Street, City, Country/Region, Zip Code/Postal Code, and State/Province, which can be geolocated by Bing.

Q33. What are the data sources for Power Map?

The data can either be present in Excel or could be present externally. To prepare your data, make sure all of the data is in Excel table format, where each row represents a unique record. Your column headings or row headings should contain text instead of actual data, so that Power Map will interpret it correctly when it plots the geographic coordinates. Using meaningful labels also makes value and category fields available to you when you design your tour in the Power Map Tour Editor pane.

To use a table structure which more accurately represents time and geography inside Power Map, include all of the data in the table rows and use descriptive text labels in the column headings.

In case you wish to load your data from an external source:

  1. In Excel, click Data > the connection you want in the Get External Data group.
  2. Follow the steps in the wizard that starts.
  3. On the last step of the wizard, make sure Add this data to the Data Model is checked.

 


Q34. What is Power View?

Ans: Power View is a data visualization technology that lets you create interactive charts, graphs, maps, and other visuals which bring your data to life. Power View is available in Excel, SharePoint, SQL Server, and Power BI.

The following pages provide details about different visualizations available in Power View:

  • Charts
  • Line charts
  • Pie charts
  • Maps
  • Tiles
  • Cards
  • Images
  • Tables
  • Multiples Visualizations
  • Bubble and scatter charts
  • Key performance indicators (KPIs)

Q35. What is Power BI Designer?

Ans: It is a stand-alone application where we can make Power BI reports and then upload them to Powerbi.com; it does not require Excel. It is effectively a combination of Power Query, Power Pivot, and Power View.

Q36. Can we refresh our Power BI reports once uploaded to cloud (Share point or Powebi.com)?

Ans: Yes, we can refresh our reports through the Data Management Gateway (for SharePoint) and the Power BI Personal Gateway (for Powerbi.com).

Q37. What are the different types of refreshing data for our published reports?

Ans: There are four main types of refresh in Power BI: package refresh, model/data refresh, tile refresh and visual container refresh.

  • Package refresh

This synchronizes your Power BI Desktop, or Excel, file between the Power BI service and OneDrive, or SharePoint Online. However, this does not pull data from the original data source. The dataset in Power BI will only be updated with what is in the file within OneDrive, or SharePoint Online.

  • Model/data refresh

It refers to refreshing the dataset, within the Power BI service, with data from the original data source. This is done either by using scheduled refresh or refresh now. A gateway is required for on-premises data sources.

  • Tile refresh

Tile refresh updates the cache for tile visuals, on the dashboard, once data changes. This happens about every fifteen minutes. You can also force a tile refresh by selecting the ellipsis (…) in the upper right of a dashboard and selecting Refresh dashboard tiles.

  • Visual container refresh

Refreshing the visual container updates the cached report visuals, within a report, once the data changes.

Q38. Is Power BI available on-premises?

No, Power BI is not available as a private, internal cloud service. However, with Power BI and Power BI Desktop, you can securely connect to your own on-premises data sources. With the On-premises Data Gateway, you can connect live to your on-premises SQL Server Analysis Services and other data sources. You can also schedule refreshes with a centralized gateway. If a gateway is not available, you can refresh data from on-premises data sources using the Power BI Gateway – Personal.

 

 

Q39. What is data management gateway and Power BI personal gateway?

A gateway acts as a bridge between on-premises data sources and Azure cloud services.

Personal Gateway:

  • Import Only, Power BI Service Only, No central monitoring/managing.
  • Can only be used by one person (personal); can’t allow others to use this gateway.

On-Premises Gateway:

  • Import and Direct Query supported.
  • Multiple users of the gateway for developing content.
  • Central monitoring and control.

 

 

Q40. What is Power BI Q&A?

Power BI Q&A is a natural language tool which helps in querying your data and getting the results you need from it. You do this by typing a question into a dialog box on your dashboard, from which the engine instantaneously generates an answer similar to Power View. Q&A interprets your question and shows you a restated query of what it is looking for in your data. Q&A was developed by the Server and Tools, Microsoft Research and Bing teams to give you the feeling of truly exploring your data.

Q41. What are some ways that Excel experience can be leveraged with Power BI?

Below are some of the ways through which we can leverage Excel with Power BI:

  • The Power BI Publisher for Excel:
    • Can be used to pin Excel items (charts, ranges, pivot tables) to Power BI Service.
    • Can be used to connect to datasets and reports stored in Power BI Service.
  • Excel workbooks can be uploaded to Power BI and viewed in the browser like Excel Services.
  • Excel reports in the Power BI service can be shared via Content Packs like other reports.
  • Excel workbooks (model and tables) can be exported to service for PBI report creation.
  • Excel workbook Power Pivot models can be imported to Power BI Desktop models.

 

 

Q42. What is a calculated column in Power BI and why would you use them?

Calculated Columns are DAX expressions that are computed during the model’s processing/refresh process for each row of the given column and can be used like any other column in the model.

Calculated columns are not compressed and thus consume more memory and result in reduced query performance. They can also reduce processing/refresh performance if applied on large fact tables, and can make a model more difficult to maintain/support given that the calculated column is not present in the source system.

 

 


Q43. How is data security implemented in Power BI ?

Power BI can apply Row Level Security roles to models.

  • A DAX expression is applied on a table filtering its rows at query time.
  • Dynamic security involves the use of USERNAME functions in security role definitions.
  • Typically a table is created in the model that relates users to specific dimensions and a role.

 

Q44. What are many-to-many relationships and how can they be addressed in Power BI ?

Many-to-many relationships involve a bridge or junction table reflecting the combinations of two dimensions (e.g. doctors and patients), containing either all possible combinations or only those combinations that have occurred.

  • Bi-Directional Crossfiltering relationships can be used in PBIX.
  • CROSSFILTER function can be used in Power Pivot for Excel.
  • DAX can be used per metric to check and optionally modify the filter context.

 

 

Q45. Why might you have a table in the model without any relationships to other tables?

There are mainly 2 reasons why we would have tables without relations in our model:

  • A disconnected table might be used to present the user with parameter values to be exposed and selected in slicers (e.g. growth assumption.)
    • DAX metrics could retrieve this selection and use it with other calculations/metrics.
  • A disconnected table may also be used as a placeholder for metrics in the user interface.
    • It may not contain any rows of data and its columns could be hidden but all metrics are visible.

 


Q46. What is the Power BI Publisher for Excel?

You can use Power BI publisher for Excel to pin ranges, pivot tables and charts to Power BI.

  • The user can manage the tiles (refresh them, remove them) from within Excel.
  • Pinned items must be removed from the dashboard in the service (removing in Excel only deletes the connection).
  • The Power BI Publisher for Excel can also be used to connect from Excel to datasets that are hosted in the Power BI Service.
  • An Excel pivot table is generated with a connection (ODC file) to the data in Azure.

 

 

Q47. What are the differences between a Power BI Dataset, a Report, and a Dashboard?

Dataset: The source used to create reports and visuals/tiles.

  • A data model (local to PBIX or XLSX) or model in an Analysis Services Server
  • Data could be inside of model (imported) or a Direct Query connection to a source.

Report: An individual Power BI Desktop file (PBIX) containing one or more report pages.

  • Built for a deep, interactive analysis experience for a given dataset (filters, formatting).
  • Each report is connected to at least one dataset.
  • Each page contains one or more visuals or tiles.

Dashboard: A collection of visuals or tiles pinned from one or more different reports.

  • Built to aggregate primary visuals and metrics from multiple datasets.

 

 


Q48. What are the three Edit Interactions options of a visual tile in Power BI Desktop?

The three Edit Interactions options are Filter, Highlight, and None.

Filter: Completely filters a visual/tile based on the filter selection in another visual/tile.

Highlight: Highlights only the related elements on the visual/tile, graying out the non-related items.

None: Ignores the filter selection from another tile/visual.

 

Q49. What are some of the differences in report authoring capabilities between using a live or direct query connection such as to an Analysis Services model, relative to working with a data model local to the Power BI Desktop file?

With a data model local to the PBIX file (or Power Pivot workbook), the author has full control over the queries, the modeling/relationships, the metadata and the metrics.

With a live connection to an Analysis Services database (cube), the user cannot create new metrics, import new data, change the formatting of the metrics, etc.; the user can only use the visualization, analytics, and formatting available on the report canvas.

With a direct query model in Power BI to SQL Server, for example, the author has access to the same features (and limitations) available to SSAS  Direct Query mode.

  • Only one data source (one database on one server) may be used, certain DAX functions are not optimized, and the user cannot use Query Editor functions that cannot be translated into SQL statements.

 


Q50. How does SSRS integrate with Power BI?

Below are some of the way through which SSRS can be integrated with Power BI:

  • Certain SSRS Report items such as charts can be pinned to Power BI dashboards.
  • Clicking the tile in Power BI dashboards will bring the user to the SSRS report.
  • A subscription is created to keep the dashboard tile refreshed.
  • Power BI reports will soon be able to be published to the SSRS portal.

The Data Monk services

We are well known for our interview books and have 70+ e-books across Amazon and The Data Monk e-shop page. Following are the best-seller combo packs and services that we are providing as of now:

  1. YouTube channel covering all the interview-related important topics in SQL, Python, MS Excel, Machine Learning Algorithm, Statistics, and Direct Interview Questions
    Link – The Data Monk Youtube Channel
  2. Website – ~2000 completely solved interview questions in SQL, Python, ML, and Case Study
    Link – The Data Monk website
  3. E-book shop – We have 70+ e-books available on our website and 3 bundles covering 2000+ solved interview questions. Do check it out
    Link – The Data E-shop Page
  4. Instagram Page – It covers only Most asked Questions and concepts (100+ posts). We have 100+ most asked interview topics explained in simple terms
    Link – The Data Monk Instagram page
  5. Mock Interviews/Career Guidance/Mentorship/Resume Making
    Book a slot on Top Mate

The Data Monk e-books

We know that each domain requires a different type of preparation, so we have divided our books in the same way:

1. 2200 Interview Questions to become Full Stack Analytics Professional – 2200 Most Asked Interview Questions
2. Data Scientist and Machine Learning Engineer -> 23 e-books covering all the ML Algorithms Interview Questions
3. 30 Days Analytics Course – Most Asked Interview Questions from 30 crucial topics

You can check out all the other e-books on our e-shop page – Do not miss it


For any information related to courses or e-books, please send an email to nitinkamal132@gmail.com

Statistics Interview Questions

Q1. What is a Sample?

A. A data sample is a set of data collected and/or selected from a statistical population by a defined procedure. The elements of a sample are known as sample points, sampling units or observations.

Q2. Define Population.

A. In statistics, population refers to the total set of observations that can be made. For example, if we are studying the weight of adult women, the population is the set of weights of all the women in the world.

Q3. What is a Data Point?

A. In statistics, a data point (or observation) is a set of one or more measurements on a single member of a statistical population.

Q4. Explain Data Sets.

A. Data sets usually come from actual observations obtained by sampling a statistical population, and each row corresponds to the observations on one element of that population. Data sets may further be generated by algorithms for the purpose of testing certain kinds of software.

Q5. What is meant by the term Inferential Statistics?

A. Inferential statistics use a random sample of data taken from a population to describe and make inferences about the population. Inferential statistics are valuable when examination of each member of an entire population is not convenient or possible.

Q6. Give an example of Inferential Statistics

A. You asked five of your classmates about their height. On the basis of this information, you stated that the average height of all students in your university or college is 67 inches.

Q7. What is Descriptive Statistics?

A. Descriptive statistics are brief descriptive coefficients that summarize a given data set, which can be either a representation of the entire or a sample of a population. Descriptive statistics are broken down into measures of central tendency and measures of variability (spread).

Q8. What is the range of data?

A. It tells us how spread out the data in a set is. In other words, it is defined as the difference between the highest and the lowest value present in the set.

X=[2 3 4 4 3 7 9]

Range(X) % returns (9-2) = 7
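The same calculation can be sketched in plain Python, reusing the sample data above:

```python
# Range = max - min, using the sample data from the example above.
x = [2, 3, 4, 4, 3, 7, 9]
data_range = max(x) - min(x)
print(data_range)  # 9 - 2 = 7
```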

Q9. Define Measurement.

A. Data can be classified as being on one of four scales: 

  • nominal
  • ordinal
  • interval
  • ratio

Each level of measurement has some important properties that are useful to know. For example, only the ratio scale has meaningful zeros.

Q10. What is a Nominal Scale?

A. Nominal variables (also called categorical variables) can be placed into categories. They don't have a numeric value and so cannot be added, subtracted, divided or multiplied. They also have no order; if they appear to have an order, then you probably have ordinal variables instead.

Q11. What is an Ordinal Scale?

A. The ordinal scale contains things that you can place in order. For example, hottest to coldest, lightest to heaviest, richest to poorest. Basically, if you can rank data by 1st, 2nd, 3rd place (and so on), then you have data that’s on an ordinal scale.

Q12. What is an Interval Scale?

A. An interval scale has ordered numbers with meaningful divisions. Temperature is on the interval scale: a difference of 10 degrees between 90 and 100 means the same as 10 degrees between 150 and 160. Compare that to high school ranking (which is ordinal), where the difference between 1st and 2nd might be .01 and between 10th and 11th .5. If you have meaningful divisions, you have something on the interval scale.

Q13. Explain Ratio Scale.

A. The ratio scale is exactly the same as the interval scale with one major difference: zero is meaningful. For example, a height of zero is meaningful (it means you don't exist). Compare that to a temperature of zero which, while it exists, doesn't mean anything in particular.

Q14. What do you mean by Bayesian?

A. Bayesians condition on the observed data and consider the probability distribution on the hypotheses. Bayesian statistics provides us with mathematical tools to rationally update our subjective beliefs in light of new data or evidence.

Q15. What is Frequentist?

A. Frequentists condition on a hypothesis of choice and consider the probability distribution on the data, whether observed or not. Frequentist statistics uses rigid frameworks of the type that you learn in basic statistics, such as hypothesis testing and confidence intervals.

Q16. What is P-Value??

A. In statistical significance testing, it is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
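As an illustrative sketch only (the z value of 1.96 is an assumed example, not from the text), a two-sided p-value for a z-statistic can be computed with the standard library's NormalDist (Python 3.8+):

```python
from statistics import NormalDist

# Two-sided p-value: probability of a standard normal value at least
# as extreme as the observed z-statistic, in either direction.
z = 1.96  # example test statistic (assumed for illustration)
p_value = 2 * (1 - NormalDist().cdf(z))
print(round(p_value, 3))  # ~0.05
```

A small p-value means the observed statistic would be unlikely if the null hypothesis were true.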

Q17. What is a Confidence Interval?

A. A confidence interval, in statistics, refers to the probability that a population parameter will fall between two set values for a certain proportion of times. Confidence intervals measure the degree of uncertainty or certainty in a sampling method.

Q18. Explain Hypothesis Testing.

A. Hypothesis testing is an act in statistics whereby an analyst tests an assumption regarding a population parameter. The methodology employed by the analyst depends on the nature of the data used and the reason for the analysis. Hypothesis testing is used to infer the result of a hypothesis performed on sample data from a larger population.

Q19. What is likelihood?

A. The probability of some observed outcomes given a set of parameter values is regarded as the likelihood of the set of parameter values given the observed outcomes.

Q20. What is sampling?

A. Sampling is that part of statistical practice concerned with the selection of an unbiased or random subset of individual observations within a population of individuals intended to yield some knowledge about the population of concern.

Q21. What are Sampling Methods?

A. There are 4 sampling methods:

  • Simple Random
  • Systematic
  • Cluster
  • Stratified
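Each of the four methods can be sketched in a few lines of Python on a toy population of 100 units (population size, sample sizes and strata below are arbitrary):

```python
import random

random.seed(0)
population = list(range(1, 101))  # toy population of 100 units

# Simple random: every subset of size n is equally likely
simple = random.sample(population, 10)

# Systematic: every k-th unit after a random start
k = 10
start = random.randrange(k)
systematic = population[start::k]

# Cluster: randomly pick a whole group and take every unit inside it
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = random.choice(clusters)

# Stratified: sample separately within each stratum (here: low/high halves)
strata = [population[:50], population[50:]]
stratified = [unit for s in strata for unit in random.sample(s, 5)]
```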

Q22. What is Mode?

A. The mode of a data sample is the element that occurs the most number of times in the data collection.

       X=[1 2 4 4 4 4 5 5]

       Mode(x)% returns 4

Q23. What is Median?

A. It is described as the numeric value that separates the lower half of a sample of a probability distribution from the upper half. It can be easily calculated by arranging all the samples from highest to lowest (or vice-versa) and picking the middle one.

      X=[2 4 1 3 4 4 3]

      X=[1 2 3 3 4 4 4]

      Median(x)% return 3

Q24. What is meant by Quartile?

A. It is a type of quantile that divides the data points into four equal parts (quarters), each containing 25% of the total observations. Generally, the data is arranged from smallest to largest.

Q25. What is Moment?

A. It is a quantitative measure of the shape of a set of points. It comprises a set of statistical parameters that describe a distribution. Four moments are commonly used:

  • Mean
  • Variance
  • Skewness
  • Kurtosis
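The four moment-based statistics can be computed directly from their population formulas; a minimal sketch (the sample data are made up):

```python
def moments(data):
    """First four moment-based statistics: mean, variance,
    skewness and (non-excess) kurtosis, using population formulas."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    sd = var ** 0.5
    skew = sum((x - mean) ** 3 for x in data) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in data) / (n * sd ** 4)
    return mean, var, skew, kurt

mean, var, skew, kurt = moments([1, 2, 3, 4, 5])
# symmetric data: mean = 3, variance = 2, skewness = 0
```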

Q26. What is the Mean of data?

A. The statistical mean refers to the mean or average that is used to derive the central tendency of the data in question. It is determined by adding all the data points in a population and then dividing the total by the number of points.

X=[1 2 3 3  6]

Sum=1+2+3+3+6=15

Mean(x)% returns (sum/5)=3

Q27. Define Skewness.

A. Skewness is a measure of the asymmetry of the data around the sample mean. If it is negative, the data are spread out more to the left of the mean than to the right; if positive, the reverse holds.

Q28. What is Variance?

A. It describes how far the value lies from the Mean. A small variance indicates that the data points tend to be very close to the mean, and to each other. A high variance indicates that the data points are very spread out from the mean, and from one another. Variance is the average of the squared distances from each point to the mean.

Q29. Define Standard Deviation.

A. In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.
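Both quantities are available in Python's standard library; here they are computed for the same sample used in the Mean example (Q26):

```python
import statistics

data = [1, 2, 3, 3, 6]              # same sample as in the Mean example
var = statistics.pvariance(data)    # population variance: mean squared distance from the mean
sd = statistics.pstdev(data)        # population standard deviation = sqrt(variance)
# var = 2.8, sd is its square root (about 1.67)
```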

Q30. What is Kurtosis?

A. Kurtosis is a measure of how outlier-prone a distribution is. In other words, kurtosis identifies whether the tails of a given distribution contain extreme values.

Q31. What is meant by Covariance?

A. Covariance measures the directional relationship between the returns on two assets. A positive covariance means that asset returns move together while a negative covariance means they move inversely. Covariance is calculated by analyzing at-return surprises (standard deviations from the expected return) or by multiplying the correlation between the two variables by the standard deviation of each variable. It gives the measure of how much two variable change together.
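The "how much two variables change together" part can be sketched directly from the definition (population covariance; the toy series are made up):

```python
def covariance(x, y):
    """Population covariance: average product of paired deviations from the means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

up = covariance([1, 2, 3], [10, 20, 30])    # positive: the series move together
down = covariance([1, 2, 3], [30, 20, 10])  # negative: they move inversely
```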

Q32. What is Alternative Hypothesis?

A. The Alternative hypothesis (denoted by H1 ) is the statement that must be true if the null hypothesis is false.

Q33. Explain Significance Level.

A. The probability of rejecting the null hypothesis when it is actually true is called the significance level α; very common choices are α = 0.05 and α = 0.01.

Q34. Do you know what is Binary search?

A. For binary search, the array should be arranged in ascending or descending order. In each step, the algorithm compares the search key value with the key value of the middle element of the array. If the keys match, then a matching element has been found and its index, or position, is returned. Otherwise, if the search key is less than the middle element’s key, then the algorithm repeats its action on the sub-array to the left of the middle element or, if the search key is greater, on the sub-array to the right.
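The description above translates almost line for line into code; a minimal sketch for an ascending-sorted array:

```python
def binary_search(arr, key):
    """Return the index of key in the ascending-sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid            # keys match: element found
        elif arr[mid] < key:
            lo = mid + 1          # repeat on the sub-array to the right
        else:
            hi = mid - 1          # repeat on the sub-array to the left
    return -1

binary_search([1, 3, 5, 7, 9, 11], 7)   # index 3
binary_search([1, 3, 5, 7, 9, 11], 4)   # -1 (not present)
```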

Q35. Explain Hash Table.

A. A hash table is a data structure used to implement an associative array, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the correct value can be found.
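A minimal sketch of such a structure, using separate chaining to handle collisions (the bucket count and keys below are arbitrary):

```python
class HashTable:
    """Minimal hash table with separate chaining for collisions."""
    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)  # hash function -> bucket index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                      # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.put("mean", 3)
table.put("median", 3)
table.get("mean")   # 3
```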

Q36. What is Null Hypothesis?

A. The null hypothesis (denoted by H0 ) is a statement about the value of a population parameter (such as the mean), and it must contain the condition of equality, i.e. be written with the symbol =, ≤, or ≥.

Q37. When You Are Creating A Statistical Model How Do You Prevent Over-fitting?

A. It can be prevented by cross-validation, and also by techniques such as regularization.

Q38. What do you mean by Cross-validation?

A. Cross-validation is a model validation technique for assessing how the results of a statistical analysis (model) will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice.

Q39. What is Linear regression?

A. A linear regression is a good tool for quick predictive analysis: for example, the price of a house depends on a myriad of factors, such as its size or its location. In order to see the relationship between these variables, we need to build a linear regression, which predicts the line of best fit between them and can help conclude whether or not these two factors have a positive or negative relationship.
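For a single predictor, the line of best fit has a closed form: slope = cov(x, y) / var(x). A minimal sketch, with made-up house-size/price data for illustration:

```python
def fit_line(x, y):
    """Ordinary least squares for one predictor:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# hypothetical data: house size (100s of sq ft) vs price (1000s)
slope, intercept = fit_line([10, 15, 20, 25], [200, 300, 400, 500])
# a positive slope means size and price have a positive relationship
```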

Q40. What are the assumptions required for linear regression?

A. There are four major assumptions:

  1. There is a linear relationship between the dependent variable and the regressors, meaning the model you are creating actually fits the data
  2. The errors or residuals of the data are normally distributed and independent from each other
  3. There is minimal multicollinearity between the explanatory variables
  4. Homoscedasticity: the variance around the regression line is the same for all values of the predictor variable

Q41. What is Multiple Regression?

A. Multiple regression generally explains the relationship between multiple independent or predictor variables and one dependent or criterion variable.  A dependent variable is modeled as a function of several independent variables with corresponding coefficients, along with the constant term.  Multiple regression requires two or more predictor variables, and this is why it is called multiple regression.

Q42. What is a Statistical Interaction?

A. Basically, an interaction is when the effect of one factor (input variable) on the dependent variable (output variable) differs among levels of another factor.

Q43. What is an example of a data set with a non-Gaussian distribution?

A. Waiting times between events typically follow an exponential distribution, and count data a Poisson distribution. The Gaussian distribution is part of the exponential family of distributions, but there are many more, often with the same ease of use, and a practitioner with a solid grounding in statistics can apply them where appropriate.

Q44. Define Correlation.

A. Correlation is a statistical technique that can show whether and how strongly pairs of variables are related.

For example: height and weight are related; taller people tend to be heavier than shorter people. The relationship isn’t perfect. People of the same height vary in weight, and you can easily think of two people you know where the shorter one is heavier than the taller one. Nonetheless, the average weight of people 5’5” is less than the average weight of people 5’6”, and their average weight is less than that of people 5’7”, etc.

Correlation can tell you just how much of the variation in peoples’ weights is related to their heights.

Q45. What is primary goal of A/B Testing?

A. A/B testing refers to a statistical hypothesis test comparing two variants, A and B. The primary goal of A/B testing is to identify whether a change to a web page increases the outcome of interest. A/B testing is a fantastic method for finding the most suitable online promotional and marketing strategies for a business.

Q46. What is meaning of Statistical Power of Sensitivity?

A. The statistical power of sensitivity refers to the validation of the accuracy of a classifier, which can be Logistic, SVM, Random Forest, etc. Sensitivity is basically True Positives / (True Positives + False Negatives), i.e. the fraction of actual events the classifier predicted correctly.
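In terms of a confusion matrix, sensitivity is true positives divided by all actual positives; a one-line sketch (the counts below are made up):

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity (recall): fraction of actual positive events the classifier caught."""
    return true_positives / (true_positives + false_negatives)

sensitivity(80, 20)  # 0.8: the classifier caught 80 of 100 real events
```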

Q47. Explain Over-fitting.

A. In the case of over-fitting, the model is highly complex, e.g. it has too many parameters relative to the number of observations. An overfit model has poor predictive performance because it overreacts to minor fluctuations in the training data.

Q48. Explain Under-fitting

A. In the case of under-fitting, the statistical model or machine learning algorithm cannot capture the underlying trend of the data. Such a model also has poor predictive performance.

Q49. What is Long Format Data?

A. In the long format, every row represents one time point per subject, so a subject's repeated measurements span multiple rows. Long-format data can be recognized by the fact that the groups or time points are represented as rows rather than as columns.

Q50. What is Wide Format Data?

A. In the wide format, the repeated responses of the subject will fall in a single row, and each response will go in a separate column.

How much is the annual income of a beggar in Bangalore?

You can assume anything and everything under the Sun; just try to keep the assumptions close to reality.
I always start with an equation. For this question, the equation I assumed was:

Annual income = Amount per day * Number of calendar days (365)

Assumption 1- A beggar begs all day of the year
Now, I have divided a complete day into 4 parts:
6 am to 10 am – High income
10 am to 4 pm – Low income
4 pm to 10 pm – High income
10 pm to 6 am  – No income 

Assumption 2 – The beggar will get more money in slots 1 and 3
Assumption 3 – The beggar interacts with 1,500 people in each slot
Assumption 4 – The success ratio table

Slot            Success Rate    Number of people giving money
6 AM – 10 AM    0.03            45
10 AM – 4 PM    0.01            15
4 PM – 10 PM    0.05            75
10 PM – 6 AM    0.006           9
Total                           144

Assumption 5 – Probability of amount, I have safely assumed that 30% people will give Rs.2, 20% will give Rs.5 and 50% will give Rs.1

Slot            Success Rate    Number of people giving money    Amount (Rs.)
6 AM – 10 AM    0.03            45                               94.5
10 AM – 4 PM    0.01            15                               31.5
4 PM – 10 PM    0.05            75                               157.5
10 PM – 6 AM    0.006           9                                18.9
Total                           144                              302.4

Now we have Rs. 302.4 per day as income.
Annual amount = 302.4 * 365 = Rs. 110,376

It doesn't matter whether the amount is high or low; what matters is that you have an approach to solve the problem. A few more things you can add here are:
1. Divide the year into seasons
2. Divide year into weekend and weekdays
3. Public Holidays
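The whole estimate can be reproduced in a few lines; the per-slot interaction counts (1,500) are those implied by the success-rate table (e.g. 45 = 0.03 x 1,500):

```python
# (interactions, success rate) per time slot, as assumed above
slots = {"6-10": (1500, 0.03), "10-16": (1500, 0.01),
         "16-22": (1500, 0.05), "22-6": (1500, 0.006)}
givers = sum(round(n * rate) for n, rate in slots.values())  # 144 people per day

# Assumption 5: 50% give Rs.1, 30% give Rs.2, 20% give Rs.5
per_giver = 0.5 * 1 + 0.3 * 2 + 0.2 * 5  # expected Rs. 2.1 per giver

daily = givers * per_giver   # Rs. 302.4 per day
annual = daily * 365         # Rs. 110,376 per year
```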

Keep Learning 🙂

Nitin Kamal