In today’s competitive job market, technical interviews serve as crucial gateways to landing coveted positions in the tech industry. Whether you’re a seasoned professional or a recent graduate, preparing for these interviews is essential to showcase your skills and expertise effectively. To help you excel in your next technical interview, let’s dive into the top 7 interview questions you’re likely to encounter:
- Write code that demonstrates a memory leak.
- What are higher-order functions? Write code to show the usage.
- Difference between flatMap and map.
- Write code to do two tasks in parallel.
- How is memory managed in the OS?
- Implement LRU Cache
- What did you learn last week?
Write code that demonstrates a memory leak
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

# Module-level list that is never cleared; this is what causes the leak
_all_nodes = []

def create_memory_leak():
    head = Node(1)
    current = head
    for i in range(2, 10000):
        new_node = Node(i)
        _all_nodes.append(new_node)  # a reference to every node is kept forever
        current.next = new_node
        current = new_node
    # The local 'head' goes out of scope here, but the nodes stay reachable

# Call the function to create the memory leak
create_memory_leak()
In this code, every node created by create_memory_leak() is appended to the module-level list _all_nodes, which is never cleared. Even after the function returns and the local variable head goes out of scope, _all_nodes still holds a reference to every node, so none of them can ever be garbage collected. That is a memory leak. (Note that merely failing to return head would not leak in CPython: the whole list would become unreachable and be reclaimed automatically.)
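Leaks like this can be confirmed empirically. The snippet below is a small, self-contained sketch (the list name and sizes are made up for illustration) that uses Python's built-in tracemalloc module to show that memory held by an ever-growing module-level list is not reclaimed after the allocating function returns:

```python
import tracemalloc

leaked = []  # module-level list that keeps every allocation alive

def allocate(n):
    for _ in range(n):
        leaked.append([0] * 100)  # each list stays referenced forever

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
allocate(10_000)
after, _ = tracemalloc.get_traced_memory()
print(f"Memory still held after allocate() returned: {after - before} bytes")
tracemalloc.stop()
```

If `leaked.clear()` were called (or the list were local to `allocate`), the traced memory would drop back down instead.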
What are higher-order functions? Write code to show the usage.
Higher-order functions are functions that can accept other functions as arguments and/or return functions as their output. Essentially, they treat functions as data, enabling more dynamic and flexible programming paradigms. This concept is a fundamental aspect of functional programming languages like Haskell, Lisp, and JavaScript.
Here’s an example in Python:
# Define a higher-order function that takes another function as an argument
def apply_function(func, x):
    return func(x)

# Define some simple functions to demonstrate usage
def square(x):
    return x * x

def cube(x):
    return x * x * x

# Using the higher-order function with different functions as arguments
result1 = apply_function(square, 5)  # Pass the 'square' function
print("Result of applying square function:", result1)

result2 = apply_function(cube, 5)  # Pass the 'cube' function
print("Result of applying cube function:", result2)
In this example, apply_function is a higher-order function because it takes another function (func) as an argument and applies it to the value x. We then demonstrate its usage by passing different functions (square and cube) to it along with a value.
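Higher-order functions can also return functions, the other half of the definition above. A quick sketch (the function names here are illustrative):

```python
def make_multiplier(factor):
    # Return a new function that remembers 'factor' via a closure
    def multiply(x):
        return x * factor
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)
print(double(5))  # 10
print(triple(5))  # 15
```

Each call to make_multiplier produces an independent function, which is the basis of patterns like decorators and partial application.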
Difference between flatMap and map
map and flatMap are both higher-order functions used in functional programming, particularly in languages like Scala, Kotlin, Swift, and JavaScript.
map: It’s a method that takes a function and applies it to each element in a collection, producing a new collection of the same size but with transformed elements. It preserves the structure of the original collection.
val numbers = List(1, 2, 3, 4, 5)
val doubled = numbers.map(_ * 2) // [2, 4, 6, 8, 10]
In this example, map is applied to each element of the list, doubling each value.
flatMap: It’s similar to map, but it flattens the result. It’s often used when the transformation function itself returns a collection: it applies the function to each element in the collection and then flattens the results into a single collection.
val nestedNumbers = List(List(1, 2), List(3, 4), List(5))
val flattened = nestedNumbers.flatMap(_.map(_ * 2)) // [2, 4, 6, 8, 10]
Here, flatMap first applies the inner map operation to each inner list, doubling each value, and then flattens the resulting nested lists into a single list.
Write code to do two tasks in parallel.
We can use Python’s multiprocessing module to run tasks in parallel. Here’s a simple example:
import multiprocessing
import time

def task1():
    print("Starting task 1...")
    time.sleep(3)  # Simulating some time-consuming task
    print("Task 1 completed!")

def task2():
    print("Starting task 2...")
    time.sleep(2)  # Simulating another time-consuming task
    print("Task 2 completed!")

if __name__ == "__main__":
    # Create a process for each task
    process1 = multiprocessing.Process(target=task1)
    process2 = multiprocessing.Process(target=task2)

    # Start both processes
    process1.start()
    process2.start()

    # Wait for both processes to finish
    process1.join()
    process2.join()

    print("Both tasks completed successfully!")
In this example, task1 and task2 are executed in parallel in separate processes created with multiprocessing.Process. The join() method is then called on each process to wait for both to finish before proceeding.
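When the tasks are I/O-bound (waiting on the network or disk rather than burning CPU), threads are usually enough, and the higher-level concurrent.futures API is more convenient. A sketch with illustrative task names and shortened delays:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(name, delay):
    print(f"Starting {name}...")
    time.sleep(delay)  # Simulating I/O-bound work
    print(f"{name} completed!")
    return name

with ThreadPoolExecutor(max_workers=2) as pool:
    future1 = pool.submit(task, "task 1", 0.3)
    future2 = pool.submit(task, "task 2", 0.2)
    # result() blocks until each task finishes
    results = [future1.result(), future2.result()]

print("Both tasks completed successfully!")
```

The trade-off to mention in an interview: processes sidestep the GIL and suit CPU-bound work, while threads are cheaper and fine for I/O-bound work.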
How is memory managed in the OS?
Memory management in operating systems involves the allocation, tracking, and deallocation of memory. The OS allocates memory to processes, manages virtual memory, and ensures that each process gets enough memory to execute without interfering with others. Techniques like paging, segmentation, and demand paging are used for efficient memory management.
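As a rough illustration of paging, the toy sketch below translates a virtual address to a physical one through a hypothetical page table (the 4 KiB page size is conventional, but the frame numbers are made up):

```python
PAGE_SIZE = 4096  # assume 4 KiB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    # Split the address into a virtual page number and an offset within the page
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:
        # A real OS would trigger a page fault and load the page on demand
        raise MemoryError(f"page fault on virtual page {vpn}")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(4100)))  # virtual page 1, offset 4 -> frame 2 -> 0x2004
```

Demand paging is exactly the page-fault branch above: pages are only loaded from disk when first touched.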
Implement LRU Cache
Here’s a simple implementation of an LRU (Least Recently Used) cache in Python:
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}
        self.usage_order = []

    def get(self, key):
        if key in self.cache:
            # Move the accessed key to the end to mark it as most recently used
            self.usage_order.remove(key)
            self.usage_order.append(key)
            return self.cache[key]
        else:
            return -1

    def put(self, key, value):
        if key in self.cache:
            # Update the value and move the key to the end
            self.cache[key] = value
            self.usage_order.remove(key)
            self.usage_order.append(key)
        else:
            if len(self.cache) >= self.capacity:
                # If the cache is full, remove the least recently used key
                lru_key = self.usage_order.pop(0)
                del self.cache[lru_key]
            # Add the new key-value pair
            self.cache[key] = value
            self.usage_order.append(key)
# Example usage:
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1)) # Output: 1
cache.put(3, 3) # evicts key 2
print(cache.get(2)) # Output: -1 (not found)
cache.put(4, 4) # evicts key 1
print(cache.get(1)) # Output: -1 (not found)
print(cache.get(3)) # Output: 3
print(cache.get(4)) # Output: 4
This implementation maintains a dictionary cache to store key-value pairs and a list usage_order to track how recently each key was used. When a key is accessed via the get method, it is moved to the end of usage_order to mark it as the most recently used. When a new key-value pair is added via the put method and the cache is already full, the least recently used key is removed from both cache and usage_order before the new pair is added.
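One caveat worth raising in an interview: list.remove and pop(0) are O(n), so this version degrades with large caches. A common follow-up is to make every operation O(1), which collections.OrderedDict gives almost for free (the class name here is just to distinguish it from the version above):

```python
from collections import OrderedDict

class LRUCacheFast:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used, O(1)
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used key
```

For memoizing pure function calls, the standard library also offers functools.lru_cache as a ready-made decorator.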
What did you learn last week?
Last week, I learned about Python decorators and how they can be used to modify or extend the behavior of functions or methods. I explored different use cases for decorators such as logging, authentication, and performance monitoring. Additionally, I delved into some advanced topics like class decorators and nested decorators, which provide even more flexibility in Python code.
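As an illustration of the logging use case mentioned above, here is a minimal decorator sketch (the decorator and function names are made up for the example):

```python
import functools

def log_calls(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__} with args={args} kwargs={kwargs}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result!r}")
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)  # logs the call and the result, then returns 5
```

The functools.wraps line is the detail interviewers often probe: without it, the decorated function would report its name as "wrapper".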