A data structure is a way to organize information. Just as a carpenter has tools to make their job easier, programmers have different ways of organizing data for processing.

Data structures are used for many things in programming: storing and accessing data, bundling data together with the behavior that operates on it (encapsulation), and passing information between functions and methods. Different programming languages come with built-in data types that have their own built-in behaviors, but those types can't represent every kind of object in the real world (there is no built-in "carpenter" type, for example). To model how objects interact with each other in our applications, we create our own classes or structures so that they behave the way we want them to!


Before you start, it's important to understand what data structures are and how they're used. Data structures are abstractions that allow us to store and manipulate information in a computer program. They're an essential part of computer science because they help us solve problems faster by organizing data so that it can be accessed and updated efficiently.

Data structures can be classified as either mutable or immutable. A mutable structure supports changes to its elements after creation (such as adding or removing them); an immutable structure does not. The most common types of data structure are arrays, linked lists, stacks, queues, heaps and graphs - each one has its own advantages and disadvantages depending on the problem at hand!

Basics of Algorithms, Complexities and Data Structures

In this section, we will learn:

  • What an algorithm is and why algorithms are important.

  • What complexity is and what it means when we say that algorithms have different complexities.

  • The difference between linked lists, arrays and stacks in terms of implementation and usage scenarios.


Arrays

An array is a data structure that stores a collection of values of the same type. Arrays are useful for storing a fixed number of values, such as the names of 10 students or the favorite colors of 10 different people.

Arrays are indexed by an integer, which is also called their index or subscript. The first element in an array has an index value of 0, the second element has an index value of 1, etc., so if we have an array containing 100 elements then our last index would be 99 (because arrays start with 0).

Whether an array can mix element types depends on the language: statically typed languages usually require every element to share one declared type, while dynamically typed languages let you store, say, integers and strings at different indexes of the same array.
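In Python, for example, the built-in list behaves like a dynamic array, and because the language is dynamically typed it can even hold mixed element types. A small illustrative sketch:

```python
# A Python list acts as a dynamic array: zero-indexed, ordered storage.
names = ["Ada", "Grace", "Alan"]

first = names[0]               # the first element lives at index 0
last = names[len(names) - 1]   # the last index is always length - 1

# Dynamically typed languages allow mixed element types in one array;
# statically typed languages usually require a single element type.
mixed = [42, "hello", 3.14]
```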

Linked Lists

Linked lists are linear data structures that use pointers to store data. They are used to store data sequentially and in a dynamic manner.

Linked lists are useful when the amount of data changes frequently, because inserting or removing an element only requires updating a few pointers rather than shifting elements around. If your application needs random access to items throughout the collection, an array is usually the better choice: arrays support direct indexing, whereas a linked list must be walked node by node from the head (and each node also spends extra memory on its pointer).
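A minimal singly linked list can be sketched as follows (the names `Node`, `push_front` and `to_list` are illustrative, not from any particular library):

```python
class Node:
    """One element of a singly linked list: a value plus a pointer to the next node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, value):
    """Insert at the front in O(1); returns the new head."""
    return Node(value, head)

def to_list(head):
    """Walk the pointers to collect all values; O(n), unlike array indexing."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in [3, 2, 1]:          # pushing 3, then 2, then 1 to the front
    head = push_front(head, v)
```

Note that reaching the last element requires walking every node, which is exactly why arrays win when random access dominates.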

Doubly Linked List

A Doubly Linked List (sometimes abbreviated DLL) is a linked list where each node has a reference to both the previous and next nodes.

Every node in a doubly linked list contains two references, one for its predecessor and one for its successor. This allows us to insert or remove a node given only a reference to it, and to traverse the list in either direction, neither of which an ordinary singly linked list supports!
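A doubly linked node and a constant-time insert-after operation can be sketched like this (names are illustrative):

```python
class DNode:
    """Doubly linked list node: references to both neighbours."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def insert_after(node, value):
    """Splice a new node in after `node` in O(1); no other nodes move."""
    new = DNode(value)
    new.prev = node
    new.next = node.next
    if node.next is not None:
        node.next.prev = new
    node.next = new
    return new

a = DNode("a")
b = insert_after(a, "b")      # list is now a <-> b
c = insert_after(a, "c")      # c lands between a and b: a <-> c <-> b
```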

Stacks, Queues and Deque

A stack is a linear data structure that follows the last-in, first-out (LIFO) principle. You can visualize it as a pile of papers on your desk, where the most recent paper was placed on top of all other papers. When you want a particular piece of paper from this pile, you first have to remove the papers above it. Similarly, in a stack data structure you can only work with the top element: to reach an element deeper in the stack (say 34), you must first pop off everything above it, and only then does 34 become accessible at the top.
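In Python a plain list already works as a stack, with `append` pushing onto the top and `pop` removing from it. A minimal sketch:

```python
# A Python list used as a stack: append pushes, pop removes the top (LIFO).
stack = []
stack.append(17)
stack.append(25)
stack.append(34)

top = stack.pop()   # removes and returns the most recently pushed element
```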

Hash Table/Maps

An important feature of hash tables is that they are implemented using a hash function, which maps an element to a bucket. The keys are mapped to buckets by applying this function to them.

Hash tables have good average-case performance but poor worst-case performance when it comes to searching for an element. For example, to find the element whose key is "apple" in a hash table with 100 buckets, you would first compute which bucket it belongs in (say, bucket 37), then check each element in that bucket until you find one whose key equals "apple". On average each bucket holds only a handful of elements, so the search is fast, but in the worst case, when many keys collide into the same bucket, the search degrades into scanning one long list!
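The bucket-mapping idea can be sketched with a fixed-size table of bucket lists (the size 8 and the helper names are illustrative; real hash tables grow as they fill):

```python
NUM_BUCKETS = 8  # illustrative fixed capacity

def bucket_for(key):
    """Map a key to a bucket index via the built-in hash function."""
    return hash(key) % NUM_BUCKETS

buckets = [[] for _ in range(NUM_BUCKETS)]

def put(key, value):
    buckets[bucket_for(key)].append((key, value))

def get(key):
    """Average O(1): only the key's own bucket is scanned, not the whole table."""
    for k, v in buckets[bucket_for(key)]:
        if k == key:
            return v
    return None

put("apple", 1)
put("banana", 2)
```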

Heaps & Priority Queue

Priority Queue

A priority queue is an abstract data type in which each element carries a priority, and the element with the highest (or lowest) priority is always served first. It is most often implemented with a heap, which supports inserting an element and removing the top-priority element in O(log n) time.

Heap (heap data structure)

A heap is a binary tree with the following properties:

  • The value at every node is greater than or equal to the values of its children (the "max-heap property"; a min-heap reverses the comparison, so every parent is less than or equal to its children).

  • The tree is complete, meaning every level is fully populated with nodes except possibly the last, which is filled from left to right.
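As a sketch, Python's standard-library `heapq` module maintains a min-heap inside a plain list, so the smallest item always sits at index 0; storing `(priority, task)` pairs turns it into a simple priority queue:

```python
import heapq

# heapq keeps the list arranged as a min-heap: smallest element at index 0.
tasks = []
heapq.heappush(tasks, (2, "write tests"))
heapq.heappush(tasks, (1, "fix bug"))
heapq.heappush(tasks, (3, "refactor"))

priority, task = heapq.heappop(tasks)  # always the lowest priority number first
```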

Sorting Algorithms (Bubble Sort, Insertion Sort, Selection Sort and Merge Sort)

Sorting is one of the most fundamental algorithms in computer science. It's used in many data structures and operations, including binary search trees and graphs. There are many different sorting algorithms, with each having its own advantages and disadvantages.

Here are four common sorting algorithms:

  • Bubble Sort: This is one of the simplest sorting algorithms to understand--it compares adjacent items in a list, swaps them if they're out of order, and repeats passes over the list until no more swaps are needed. Because each pass only bubbles one element into its final place, this algorithm is slow (O(n²)) on large sets of data.

  • Insertion Sort: This algorithm grows a sorted prefix of the array one element at a time. It takes the next unsorted element and walks it backwards through the sorted prefix, shifting larger elements one position to the right, until it finds the element's correct spot. For example, to sort [4, 1, 3]: the prefix [4] is trivially sorted; inserting 1 shifts 4 right to give [1, 4]; inserting 3 shifts 4 right again to give [1, 3, 4]. The advantage here is that this method sorts in place, requiring no extra arrays, and performs very well on small or nearly sorted inputs!

  • Selection Sort: This algorithm repeatedly scans the unsorted portion of the list for its smallest element and swaps it into place at the end of the sorted portion. Like bubble sort it runs in O(n²) time, but it performs at most n - 1 swaps.

  • Merge Sort: This divide-and-conquer algorithm splits the list in half, recursively sorts each half, and then merges the two sorted halves back together. It runs in O(n log n) time at the cost of extra memory for the merge step.
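The shift-and-insert idea behind insertion sort can be sketched in a few lines of Python:

```python
def insertion_sort(items):
    """Sort a list in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current  # drop the element into its correct spot
    return items
```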

Binary Search and Advance Searching Algorithms (Ternary Search and Jump Search)

In this section you will learn about binary search and advanced searching algorithms. You will also learn about the search space and its complexity.

Binary Search: Binary search finds an element in a sorted collection by repeatedly halving the search space. It compares the target to the middle element: if they are equal, the search is over; if the target is smaller, the search continues in the left half; and if it is larger, in the right half. This repeats until either the target value has been found or the remaining range is empty, which proves with certainty that the target does not exist in the input list. Because each step discards half of the remaining candidates, binary search runs in O(log n) time.
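The halving loop described above can be sketched as:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent. O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1              # range is empty: target is not present
```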


Trees (Binary Tree, Binary Search Tree, AVL Tree)

A binary tree is a data structure in which each node has at most two children (left and right); it allows us to store and access data efficiently. Two important variants are:

  • Binary Search Tree (BST): a binary tree with an ordering property: every key in a node's left subtree is less than the node's key, and every key in its right subtree is greater. This ordering lets searching, insertion and deletion run in O(h) time, where h is the height of the tree.

  • AVL Tree: a self-balancing BST that keeps the heights of each node's two subtrees within one of each other by performing rotations after insertions and deletions, guaranteeing O(log n) operations even in the worst case.
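A BST's ordering property is easiest to see in a minimal insert-and-search sketch (an unbalanced tree for illustration; an AVL tree would add rebalancing rotations):

```python
class BSTNode:
    """A binary search tree node: smaller keys live left, larger keys right."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, preserving the BST ordering; returns the (possibly new) root."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Follow the ordering property downward; O(h) comparisons."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
```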

Hashing Techniques (Collision Resolution Techniques like Separate Chaining and Open Addressing(Linear Probing))


Hash Table: A data structure that stores key-value pairs and is used to efficiently retrieve a value by its key. The most common implementation is an array of buckets indexed by a hash of the key; hash array mapped tries (HAMTs) are a related structure used mainly for immutable maps in functional languages.

Hashing techniques: A hash table stores its elements in a table of buckets, with a hash function deciding which bucket each key belongs to. The goal is an efficient way of quickly finding an element given its key: compute the key's hash, jump straight to that bucket, and search only there. When two different keys hash to the same bucket, we have a collision, and we need a collision resolution technique such as separate chaining (each bucket holds a list of every entry hashed to it) or open addressing with linear probing (a colliding entry is placed in the next free slot of the table itself).
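Open addressing with linear probing can be sketched like this (the capacity 11 and helper names are illustrative; a real table would also resize before it fills up):

```python
SIZE = 11                 # illustrative fixed capacity
slots = [None] * SIZE     # each slot holds a (key, value) pair or None

def probe_put(key, value):
    """Linear probing: on a collision, step forward to the next free slot."""
    i = hash(key) % SIZE
    while slots[i] is not None and slots[i][0] != key:
        i = (i + 1) % SIZE
    slots[i] = (key, value)

def probe_get(key):
    """Follow the same probe sequence until the key or an empty slot appears."""
    i = hash(key) % SIZE
    while slots[i] is not None:
        if slots[i][0] == key:
            return slots[i][1]
        i = (i + 1) % SIZE
    return None

probe_put("apple", 1)
probe_put("plum", 2)
```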

Have fun while learning a new concept

Keep in mind that there are many ways to learn a new concept; what matters is finding the method that works best for you.

You can use books, websites, classmates, tutors or even your own experience as resources. There are many tools available online to help you learn faster and more efficiently than ever before!

The most important thing here is to have fun while learning something new!
