2436 Data Structures and Algorithms

Ashley Andruss

Created on July 31, 2024

Transcript

Developed by: Ashley Andruss, Can Ercan, & Sami Hamdalla

cmap2: implementation

cmap1:concept

dynamic arrays

cmap2: implementation

cmap1:concept

cmap2: implementation

cmap1:concept

GRAPHS

Recursion

linked lists

TREES

encompasses

cmap1:concept

cmap2: implementation

Data Structures and Algorithms

cmap1:concept

cmap2: implementation

HEAPS

big o notation

encompasses

cmap1:concept

cmap2: implementation

cmap1:concept

cmap2: implementation

STACKS

HASHING

QUEUES

cmap1:concept

cmap1:concept

cmap2: implementation

cmap2: implementation

cmap2: implementation

cmap1:concept

What is a concept map?

Nodes correspond to the concepts or important terms related to your studies of a topic. For example, the concept "water" can be defined by other concepts, such as liquid, solid, and gas. The relationship of each concept to other concepts determines its meaning. Thus, a concept map is a set of relationships among other concepts. Labeled links identify the type of relationship. Therefore, the line between a pair of concepts denotes a relationship, and the label on the line tells how the two concepts are related. For example, in a concept map of seasons, the relationship between the amount of sunlight and temperature variations is labeled as "cause" – in other words it is an action relationship between antecedent and consequent. In a concept map of dairy policies, the relationship between "dairy policy" and "federal milk marketing order" is labeled as "includes" because it represents an inclusion relationship between superset and subset.

https://pennstatelearning.psu.edu/istudy_tutorials/conceptmaps/conceptmaps_print.html

computer science

why do a map first?

  • Concept maps help me visualize what I already know about a topic. They guide my study and research about that topic and can enhance meaningful and deeper learning.
  • To construct a concept map, you need to determine important concepts and the relationships between these concepts. By doing that, you explore your understanding of the topic. In other words, you examine and reflect on what you know about the topic.
  • This fundamental understanding enhances the ability to build optimized and accurate code, leading to a comprehensive program to solve complex problems.
A Life Rule of Thumb: You need to understand a problem before you try to build the solution that fixes it.

https://pennstatelearning.psu.edu/istudy_tutorials/conceptmaps/conceptmaps_print.html | https://gogeometry.com/mindmap/academic_computer_science.html

Main menu

cmap 1: concept

Dynamic Arrays

asks the question of

how

where

when

why

what

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

definition

problems solved

structure

advantages

disadvantages

scenarios

including a(n)

for

with

including

including

having a(n)

such as

step 1

step 2

dynamic data structures

collection of elements

unknown dataset size

underlying fixed-size array

capacity variable

frequent insertion or deletion

total allocated space

resizing operations new array

initialization

efficient random access O(1)

static array limitations

search algorithms

execute operations steps

deletion

sorting algorithms

insertion

access

search

size variable

for number of elements

print

resize

such as

such as

such as

that have key characteristics of

such as

insertion sort

quick selection

binary search

dynamic graphs

linked lists

accessed using indices

dynamic resizing

static array fixed size

simplified operations

bubble sort

linear search

hash tables

quicksort

selection sort

stacks

contiguous blocks of memory

queues

mergesort

heapsort

bucket sort

go to implementation

define the array

with step 1

cmap 2: implementation

Main menu

recursive process

create recursive case

Dynamic Arrays

with step 2

can have a

define base case

with step 3

define recursive call

with step 4

call recursive function

with step 5

has operations such as

resize

insertion

search

print

access

deletion

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

question

process

process

process

question

question

question

process

question

process

question

process

such as

following

such as

following

such as

following

such as

such as

following

following

following

such as

Access Steps

Resize Steps

Print Steps

Print entire array or up to index?

What is the New Capacity?

What Index Do You Want?

Deletion Location

Search Location

Insertion Location

Where to Delete?

Where to Search?

Where to Insert?

How should the output be formatted?

Which Element to Search For?

Which Element to Delete?

Is the Index In Bounds?

What is the Current Capacity?

Is There Capacity?

Resize Array Now or Until Necessary?

How to Shift Elements?

to decide

to decide

to decide

Insert at End?

Insert at Beginning?

Insert at Specific Index?

Delete at Specific Index?

Delete at Beginning?

Delete at End?

Linear Search?

Binary Search?

step 1

step 1

step 1

step 1

step 1

step 1

step 1

step 1

step 1

step 1

step 1

Validate index: 0 <= index < size

Check capacity & if array needs resizing

Determine new capacity, e.g., new_cap = cap * 2

Check if array is empty

Start from 1st element: set index pointer to 0

Check capacity & if array needs resizing

Check capacity & if array needs space

Check if array is empty

Start from 1st element: set index pointer to 0

Check if array is empty

Initialize pointers: 0 to size - 1

step 2

step 2

step 2

step 2

step 2

step 2

step 2

step 2

step 2

step 2

step 2

Allocate new array of capacity new_cap

Iterate through array: loop over elements up to size

Validate index is within bounds: 0 to size - 1

Shift elements to the right: size - 1 down to 0

Iterate through array: compare target value; increment

Access element: return element at index

Iterate through search: mid = (low + high) / 2

Decrement size of array

Insert new element at size index

Shift elements to the left: index 1 to size - 1

Validate index: is the specific index within 0 to size (not out of bounds)?

step 3

step 3

step 3

step 3

step 3

step 3

step 3

step 3

step 3

Shift elements from specific index to left

Copy Elements old array to new array

Format each element, e.g., comma-separated values

Check array midpoint values: mid, mid - 1, mid + 1

Decrement size of array

Check Each Element Match? Return index

Shift elements from specific index to right

Increment size of array by 1

Insert element index 0

step 4

step 4

step 4

step 4

step 4

step 4

step 4

Update size decrease by 1

End Search return -1 if not found

Print Elements Output into specified format

Update reference: original array reference points to new array

Repeat until found: adjust low/high until low > high

Insert element at specific index

Increment size of array by 1

step 5

step 5

step 5

step 5

End Print Include new line or termination

End Search return -1 if not found

Increment size of array by 1

Deallocate Memory used by old array
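
The resize and end-insertion steps above can be sketched in C. This is a minimal illustration, not the original authors' code; the DynArray struct and the dyn_resize/dyn_push names are invented for the example.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical dynamic array: a fixed-size backing array plus size/capacity. */
typedef struct {
    int *data;      /* underlying fixed-size array */
    size_t size;    /* number of elements in use */
    size_t cap;     /* total allocated space */
} DynArray;

/* Resize following the steps above: new_cap = cap * 2, allocate, copy,
   update the reference, deallocate the old memory. Returns 0 on success. */
static int dyn_resize(DynArray *a) {
    size_t new_cap = a->cap ? a->cap * 2 : 1;            /* determine new capacity */
    int *new_data = malloc(new_cap * sizeof *new_data);  /* allocate new array */
    if (new_data == NULL) return -1;
    if (a->data != NULL) {
        memcpy(new_data, a->data, a->size * sizeof *new_data); /* copy old elements */
        free(a->data);                                         /* deallocate old array */
    }
    a->data = new_data;                                   /* update the reference */
    a->cap = new_cap;
    return 0;
}

/* Insertion at the end: check capacity, resize if needed, place the element. */
static int dyn_push(DynArray *a, int value) {
    if (a->size == a->cap && dyn_resize(a) != 0) return -1;
    a->data[a->size++] = value;   /* insert new element at size index, increment size */
    return 0;
}
```

Usage would start from an empty array, e.g. `DynArray a = {0};` followed by repeated `dyn_push(&a, x);` calls; the array doubles its capacity whenever it runs out of space.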


Main menu

cmap 1: concept

Recursion

asks the question of

how

when

what

where

why

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

definition

problems solved

structure

advantages

disadvantages

scenarios

including a

for

such as

including

including

having a(n)

such as

step 1

step 2

base case (stop recursion)

tree traversals

function that calls itself

nested structures

base case

repetitive subtasks

memory intensive

recursive case

simplifies complex problems

fit for hierarchical data

tree traversal & graphs

search algorithms

recursive case (calls function)

sorting algorithms

function calls itself

graphs

linked lists

stack overflow risk

fibonacci series

factorial & gcd

such as

to solve

such as

such as

towers of hanoi

divide and conquer

quicksort

heapsort

smaller instances of the same problem

depth-first search

pre-order

in-order

binary search

mergesort

post-order

go to implementation

cmap 2: implementation

Main menu

Recursion

components such as

recursive case

base case

steps

following

encompassing a

encompassing a

question

process

process

question

step 4

step 1

step 2

step 3

following

such as

as

such as

as

as

following

as

Ensure Termination (base case)

When should recursion stop?

Define the simplest problem (e.g., n == 0 in factorial)

What step reduces the problem size?

Call function with smaller input

Identify Recursive Case

Implement Function Call

Identify Base Case
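
As a minimal sketch of these four steps (identify the base case, identify the recursive case, implement the call, ensure termination), the factorial example mentioned above might look like this in C; the function name and types are chosen only for illustration.

```c
/* Factorial via recursion: base case n == 0 stops the recursion,
   recursive case calls the function with a smaller input. */
static unsigned long factorial(unsigned int n) {
    if (n == 0)                    /* base case: simplest problem */
        return 1;
    return n * factorial(n - 1);   /* recursive case: smaller instance of the same problem */
}
```

Each call reduces n by one, so the chain of calls is guaranteed to reach the base case and terminate.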


Main menu

cmap 1: concept

Linked List

asks the question of

how

when

where

why

what

leading to the

can this be used in

explaining

it is used in

can this be applied with the

applications

definition

limitations

USE CASES

scenarios

types

structure

process

for

with

including

having a(n)

has

including a(n)

Applications that require dynamic memory allocation, such as text editors and databases.

Implementing other data structures like stacks, queues, and graphs

step 1

Efficient insertions/deletions and dynamic memory allocation

Used when frequent insertions/deletions are needed

step 3

Slower access time compared to arrays

step 2

When random access is not needed

Nodes contain data and references to the next node.

A data structure consisting of nodes.

Singly, Doubly, Circular Linked Lists.

Memory management in operating systems; navigation

initialization

Adding elements

Removing elements

When the number of elements is unknown or changes frequently.

Create a new node

Find the node to be removed.

Create an empty list with the head pointing to null.

Point the new node's next reference to the current head

Update the previous node's reference to skip the removed node

Update the head to be the new node

Free the memory of the removed node

go to implementation

cmap 2: implementation

Main menu

Linked List

has operations such as

Traversal

print

deletion

insertion

access

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

question

process

process

process

question

question

question

process

question

process

such as

following

such as

following

such as

following

such as

following

following

such as

What position Do You Want?

Access Steps

Deletion Location

Action

Insertion Location

Where to Delete?

Where to Insert?

Which Element to Delete?

Print Steps

How should the output be formatted?

How to traverse?

Is There Capacity?

How to Shift Elements?

to decide

to decide

to decide

Insert at End?

Insert at Beginning?

Insert at Specific Index?

Delete at Specific Index?

Delete at Beginning?

Delete at End?

Traverse

Start from head

step 1

Check if the list is empty

Check if the node is the head

Start at head

Traversing to the end

Find the correct position

Find the node to delete

Find the correct position

Print value

Start from head

step 2

Assigning new node as head

Reassigning head to next node

Assigning new node as tail

Move to next node using next

Reassigning head to next node

Remove tail node

Move to next node using next

Move to next node using next

Reassigning pointers to insert node
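
A minimal C sketch of the insertion-at-beginning and traversal steps above, assuming a singly linked list of ints; the Node struct and function names are illustrative, not the authors' implementation.

```c
#include <stdio.h>
#include <stdlib.h>

/* Singly linked list node: data plus a reference to the next node. */
typedef struct Node {
    int data;
    struct Node *next;
} Node;

/* Insertion at the beginning: create a node, point its next reference to the
   current head, then make the new node the head. */
static Node *insert_front(Node *head, int value) {
    Node *n = malloc(sizeof *n);
    if (n == NULL) return head;   /* allocation failed; keep the old list */
    n->data = value;
    n->next = head;               /* point new node's next to current head */
    return n;                     /* new node becomes the head */
}

/* Traversal/print: start at the head, move to the next node until NULL. */
static void print_list(const Node *head) {
    for (const Node *cur = head; cur != NULL; cur = cur->next)
        printf("%d -> ", cur->data);
    printf("NULL\n");
}
```

An empty list is just `Node *head = NULL;`, matching the "head pointing to null" initialization step in the concept map.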


Main menu

cmap 1: concept

Big O Notation

asks the question of

when

what

why

HOW

where

leading to the

can this be used in

explaining

it is used in

applications

applications

Type

definition

limitations

structure

Use cases

scenarios

SORTING ALGORITHMS

sEARCH ALGORITHMS

graph algorithms

having a(n)

including a(n)

including a(n)

with

including

having a(n)

O(1), O(n), O(n^2), ...

Measures algorithm efficiency in time/space

Functions describe worst-case scenario growth.

Doesn't account for constants and lower-order terms

Helps in analyzing algorithm performance

Operating Systems:

Game Development:

used in evaluating algorithm scalability

breadth-first search

BINARY SEARCH

Quick Sort – O(n log n) on average but O(n^2) in the worst case with poor pivot selection.

Big O notation doesn't reflect constant factors.

Hash Table: Average O(1) lookup, but worst-case O(n) due to collisions.

Algorithms like A* for pathfinding are chosen based on Big O to ensure smooth performance even with complex environments.

Scheduling algorithms are evaluated based on Big O complexity.

QUICK SORT

MERGE SORT

bubble sort

O(n log n): Merge sort, Quick sort (average case)

O(n^2): Bubble sort, Selection sort

O(n): linear search

O(log n): binary search

O(1): hash table operations

go to implementation

cmap 2: implementation

Main menu

Big O Notation

has operations such as

Examples

Scenarios

Time

Space

encompassing a

encompassing

encompassing a

encompassing a

Optimizing Space Usage

Memory allocation

Loops

Recursion

Recursive algorithms

Best case

Worst case

Understanding memory usage in functions

Average Case

such as

following

Graph

Search

Stack space in recursion

Sorting

Consecutive

Techniques to reduce space complexity

Identifying Recurrence Relations

Solving Recurrence Relations

Single

Nested

Linear: O(n)

Min time required under optimal conditions

BFS: O(V+E)

Bubble: O(n^2)

Max time required under the worst conditions

Expected time for random inputs

Counting space complexity in recursion

Counting variable and data structures

Binary: O(log n)

Time complexity in sequential operations

Dijkstra's: O(V^2)

Merge: O(n log n)

What are recurrence relations?

Tail Recursion: Optimized by some languages to use constant space.

Master theorem

Counting iterations in nested loops

Counting iterations in single loops

Linear search if target is not present

Linear search if the first element is target

In-place Sorting Algorithms: Heap sort (O(1) space complexity).

Quick sort with random pivots

Heap Sort:

Quick Sort:

Recursion trees

Adding time complexities of consecutive statements

Examples of recurrence relations

Best Case: Example of bubble sort’s best case (already sorted array, O(n)).

Worst Case: Highlight how quick sort degrades to O(n^2) with poor pivot selection.

Average Case: For quick sort, it might help to mention that it is O(n log n) in average cases.

Time complexity of single loops

Time complexity of nested loops

Tail Recursion: Mention if a language optimizes tail-recursive functions to use O(1) space.

O(n) space for recursion due to stack frames (for algorithms like depth-first search).

O(n) recursion: Linear recursion like factorial or Fibonacci (without memoization).

O(log n) recursion: Binary search in a recursive form.

Single loop: O(n) (e.g., iterating through an array)

Nested loops: O(n^2) (e.g., comparing all pairs in an array)

cmap 1: concept

Main menu

Stacks

asks the question of

how

when

where

what

why

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

definition

structure

advantages

disadvantages

Problems solved

scenarios

including a

for

with

including

having a(n)

such as

such as

step 1

step 2

step 3

backtracking & depth-first search

undo operation (last element added is needed first)

fast insertion/deletion

potential overflow if too many pushes

top element

MEMORY MANAGEMENT

Data structure

FUNCTION CALL

function call management

linked nodes/array

BROWSER HISTORY

initialize empty stack

isEmpty

POP

push

PEEK (Top)

execute operations steps

UNDO MECHANISMS

syntax parsing

EXPRESSION EVALUATION

order preservation

expression evaluation

SIZE LIMITATION

manage stack pointer/size

that utilizes the

last in, first out (LIFO) principle

go to implementation

cmap 2: implementation

Main menu

Stacks

has operations such as

isEmpty

PEEK

POP

PUSH

encompassing a

encompassing a

encompassing a

encompassing a

Linked list-BASED

Array-based

Linked list-BASED

Linked list-BASED

Array-based

Array-based

Linked list-BASED

Array-based

following

such as

following

such as

following

such as

following

such as

Check if stack is empty

Check if stack is empty

Check if stack is empty

Check if stack is empty

Check if stack is empty

Check if stack is empty

Create a new node

Check if stack is full

Check if top is -1

Check if top is -1

Check if top is -1

Check if head is null

Check if head is null

Compare top with max - 1

Check if head is null

Assign data to node

Retrieve element at top

Return element at top

store data from head

Return data from head node

Add element at top positions

Point node's next to current head

Decrement top

Increment top

update head to next node

Update head to new node

delete old head node
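
The array-based branch of these operations could be sketched in C roughly as follows; MAX, the Stack struct, and the function names are assumptions made for the example (top starts at -1 for an empty stack, e.g. `Stack s = { .top = -1 };`).

```c
#include <stdbool.h>

#define MAX 100   /* hypothetical fixed capacity for the array-based stack */

typedef struct {
    int data[MAX];
    int top;       /* index of the top element; -1 means the stack is empty */
} Stack;

static bool is_empty(const Stack *s) { return s->top == -1; }      /* check if top is -1 */
static bool is_full(const Stack *s)  { return s->top == MAX - 1; } /* compare top with max - 1 */

/* Push: check for overflow, increment top, add the element at the top position. */
static bool push(Stack *s, int value) {
    if (is_full(s)) return false;     /* potential overflow if too many pushes */
    s->data[++s->top] = value;
    return true;
}

/* Pop: check for underflow, return the element at the top, decrement top. */
static bool pop(Stack *s, int *out) {
    if (is_empty(s)) return false;
    *out = s->data[s->top--];
    return true;
}

/* Peek: retrieve the element at the top without removing it. */
static bool peek(const Stack *s, int *out) {
    if (is_empty(s)) return false;
    *out = s->data[s->top];
    return true;
}
```

The linked-list-based variant follows the same steps but replaces the top index with a head pointer, as in the node-based steps listed above.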

cmap 1: concept

Main menu

Queues

asks the question of

how

when

where

what

why

leading to the

can this be used in

it is used in

can this be applied with the

explaining

operations

process

applications

definition

structure

scenarios

Types

Use cases

Limitations

including a(n)

for

having a(n)

with

such as

including a(n)

step 1

step 2

including

Not suitable for random access

elements added at one end

CUSTOMER SERVICE SYSTEMS

Operating systems

Linear data structure

unknown dataset size

Linear collection

frequent insertion or deletion

initialization

execute operations steps

DEQUEUE

ENQUEUE

scheduling

WEB SERVERS

ISEMPTY

FRONT

Simple

Elements removed at other end

Buffering

To manage requests, where the first request made should be the first one served.

Queues manage tasks in CPU scheduling and handling print jobs.

PRINT

Where customers are served in the order they arrive (like a line at the bank).

not suitable when priority order is needed

Circular

that have key characteristics of

Handling async data

Priority

First In,First Out (FIFO) principle

Double ended

go to implementation

cmap 2: implementation

Main menu

Queues

has operations such as

peek

dequeue

ENQUEUE

encompassing a

encompassing a

encompassing a

Circular

sIMPLE

pRIORITY

Deque

Circular

Circular

sIMPLE

sIMPLE

pRIORITY

pRIORITY

Deque

Deque

such as

following

such as

following

such as

following

such as

following

such as

such as

following

following

determine priority based on context

print front index

ensures the rear points to the front

print front index

determine priority based on context

determine priority based on context

Adding at front

remove at front

ensures the rear points to the front

ensures the rear points to the front

Where to add a new element?

where to remove element?

print based on given priority

print rear index

insert based on given priority

remove based on given priority

print front in circular

Adding at rear

remove at rear

Adds an element to the rear

remove an element in the front

increment rear in circular

increment front in circular

increment the front pointer

increment the rear pointer
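
A minimal C sketch of circular-queue enqueue and dequeue consistent with the steps above; the QSIZE capacity, struct layout, and function names are invented for illustration (initialize with `CircularQueue q = {0};`).

```c
#include <stdbool.h>

#define QSIZE 8   /* hypothetical capacity for the circular queue */

typedef struct {
    int data[QSIZE];
    int front;    /* index of the first (oldest) element */
    int count;    /* number of stored elements */
} CircularQueue;

/* Enqueue: add an element at the rear; the rear index wraps around to the
   start of the array, which is what "increment rear in circular" means. */
static bool enqueue(CircularQueue *q, int value) {
    if (q->count == QSIZE) return false;        /* queue is full */
    int rear = (q->front + q->count) % QSIZE;   /* wrap-around rear position */
    q->data[rear] = value;
    q->count++;
    return true;
}

/* Dequeue: remove the element at the front (FIFO) and advance the front
   pointer with wrap-around. */
static bool dequeue(CircularQueue *q, int *out) {
    if (q->count == 0) return false;            /* queue is empty */
    *out = q->data[q->front];
    q->front = (q->front + 1) % QSIZE;          /* increment front in circular fashion */
    q->count--;
    return true;
}
```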

cmap 1: concept

Main menu

Hashing

asks the question of

how

when

where

what

why

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

definition

problems solved

structure

advantages

disadvantages

scenarios

capacity

including a(n)

for

with

including

including

having a(n)

such as

step 1

step 2

Collision Handling

Collisions: Keys may hash to the same index, requiring extra work.

unknown dataset size

dynamic data structures

initialization

Hash table

execute operations steps

deletion

insertion

frequent insertion or deletion

Hash function

Dynamic Resizing: Hash table can grow/shrink based on load factor.

Fast Lookups: O(1) on average for insertion, deletion, search

BUCKET

LOAD FACTOR

access

search

high-speed lookups

Ratio of filled slots to total capacity; used to decide resizing.

resize

Array-like structure where the hash function directs elements.

Total number of slots in the hash table.

Slot in the table to hold elements.

Maps a key to an index in the hash table.

caches

Collisions Solved: Collisions are handled using chaining or open addressing.

Efficient Retrieval: Overcomes the slow search time of linear searches.

such as

Resizing Overhead: Rehashing all elements during resizing is time-consuming.

Database indexing

hash sets

hash maps

hash tables

go to implementation

cmap 2: implementation

Main menu

Hashing

has operations such as

resize

insertion

print

search

deletion

access

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

question

process

process

process

question

question

question

process

question

process

question

process

such as

following

such as

following

such as

following

such as

such as

following

following

following

such as

Access Steps

Resize Steps

Print Steps

Print entire array or up to index?

What is the New Capacity?

What Index Do You Want?

Deletion Location

Search Location

Insertion Location

Where to Delete?

Where to Search?

Where to Insert?

How should the output be formatted?

Which Element to Search For?

Which Element to Delete?

Is the Index In Bounds?

What is the Current Capacity?

Is There Capacity?

Resize Array Now or Until Necessary?

How to Shift Elements?

to decide

to decide

to decide

INSERTION

DELETION

SEARCH

step 1

step 1

step 1

step 1

step 1

step 1

Validate index using hash function

Compute index using hash function

Determine new capacity based on load factor

Compute index using hash function

Compute index using hash function

Iterate over all buckets

step 2

step 2

step 2

step 2

step 2

step 2

Allocate new table

Print elements in each bucket

Check if index is free

Check bucket for element

Locate element at index

Return element at index, handle collision if needed

step 3

step 3

step 3

step 3

step 3

Rehash all elements

Handle chaining if needed

Handle collision if needed

Handle collision if needed (chaining/open addressing)

Insert if free, else handle collision

step 4

step 4

step 4

step 4

step 4

Remove element or mark as deleted

Adjust load factor and resize if needed

Return element or indicate not found

Format output

Free old memory

step 5

Adjust load factor and resize if necessary
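
A rough C sketch of insertion and search with separate chaining, following the compute-index / check-bucket / handle-collision steps above; the bucket count, toy hash function, and names are assumptions, and resizing by load factor is omitted for brevity.

```c
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

#define BUCKETS 16   /* hypothetical fixed number of buckets */

/* Separate chaining: each bucket holds a linked list of key/value entries. */
typedef struct Entry {
    char key[32];
    int value;
    struct Entry *next;
} Entry;

typedef struct {
    Entry *buckets[BUCKETS];
} HashTable;

/* Toy hash function: maps a key to a bucket index. */
static unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key) h = h * 31 + (unsigned char)*key++;
    return h % BUCKETS;
}

/* Insertion: compute the index, check the bucket, chain on collision. */
static bool ht_insert(HashTable *t, const char *key, int value) {
    unsigned i = hash(key);                       /* step 1: compute index */
    for (Entry *e = t->buckets[i]; e; e = e->next)
        if (strcmp(e->key, key) == 0) {           /* key already present: update value */
            e->value = value;
            return true;
        }
    Entry *e = malloc(sizeof *e);                 /* new entry, chained onto the bucket */
    if (e == NULL) return false;
    strncpy(e->key, key, sizeof e->key - 1);
    e->key[sizeof e->key - 1] = '\0';
    e->value = value;
    e->next = t->buckets[i];
    t->buckets[i] = e;
    return true;
}

/* Search: compute the index, walk the chain, report whether the key was found. */
static bool ht_search(const HashTable *t, const char *key, int *out) {
    for (const Entry *e = t->buckets[hash(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { *out = e->value; return true; }
    return false;                                 /* indicate not found */
}
```

A table starts empty (`HashTable t = {0};`); deletion and resizing would follow the corresponding step lists above (unlink the entry, or allocate a bigger bucket array and rehash everything).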

cmap 1: concept

Main menu

Heaps

asks the question of

how

when

where

what

why

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

definition

structure

USE CASES

LIMITATIONS

scenarios

Types

including a(n)

for

with

including

having a(n)

including a(n)

such as

step 1

step 2

min heap

Fixed Structure

special type of binary tree

Binary tree

dynamic data structures

unknown dataset size

frequent insertion or deletion

initialization

Priority Queues

search algorithms

execute operations steps

deletion

sorting algorithms

insertion

Parent node > child node

Max heap

access

search

HEAP SORT

Not Ideal for Searching Arbitrary Elements

resize

such as

such as

that have key characteristics of

such as

graph algorithms

Max heap

High Memory Overhead

Min Heap

insertion sort

quick selection

binary search

dynamic graphs

linked lists

bubble sort

The parent node is always greater than or equal to its children.

The parent node is always less than or equal to its children.

linear search

hash tables

quicksort

selection sort

stacks

queues

mergesort

heapsort

the largest or the smallest is at the root

bucket sort

go to implementation

cmap 2: implementation

Main menu

Heaps

has operations such as

Heapify

resize

insertion

Peek

deletion

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

process

process

process

process

process

following

following

following

following

following

Replace the root with the last element in the heap

Access the root element

Start from the last non-leaf node

Remove the root element

Add the new element at the end of the heap

(for min-heap, this is the smallest; for max-heap, it’s the largest)

apply the heap property recursively

smallest in min-heap, largest in max-heap

Remove the last element

maintains a complete binary tree

Move elements down the tree until the subtree satisfies the heap property.

Replace it with the last element in the heap, then bubble down as in delete

Bubble down the replaced root to restore the heap property

Bubble up the element by comparing it to its parent

Constant time (O(1)) since the root element is always at the top

by swapping with its smallest (min-heap) or largest (max-heap) child until correct.

Swap as needed until the heap property is restored (min-heap or max-heap)

Efficient way to build a heap from an unsorted array (O(n) complexity).

Efficiently returns the min/max value while maintaining the heap structure.

No modification to the heap, only read access.
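
A minimal C sketch of the delete-root ("replace with the last element, then bubble down") process described above, for an array-backed min-heap; the function names and array representation are illustrative assumptions.

```c
/* Min-heap stored in an array: the children of index i are 2*i+1 and 2*i+2,
   so the smallest element is always at the root (index 0). */
static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Bubble down: swap the node with its smallest child until the subtree
   satisfies the heap property again. */
static void bubble_down(int heap[], int size, int i) {
    for (;;) {
        int smallest = i;
        int left = 2 * i + 1, right = 2 * i + 2;
        if (left  < size && heap[left]  < heap[smallest]) smallest = left;
        if (right < size && heap[right] < heap[smallest]) smallest = right;
        if (smallest == i) break;           /* heap property restored */
        swap_int(&heap[i], &heap[smallest]); /* move the element down the tree */
        i = smallest;
    }
}

/* Delete the root: replace it with the last element, shrink the heap,
   and bubble the new root down. */
static int extract_min(int heap[], int *size) {
    int root = heap[0];                     /* smallest element in a min-heap */
    heap[0] = heap[--(*size)];              /* replace root with last element */
    bubble_down(heap, *size, 0);            /* restore the heap property */
    return root;
}
```

Insertion is the mirror image: append at the end and bubble the element up toward the root; a max-heap uses the same steps with the comparisons reversed.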

cmap 1: concept

Main menu

Trees

asks the question of

what

how

when

where

why

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

definition

problems solved

structure

advantages

disadvantages

scenarios

including a(n)

for

with

including

including

having a(n)

such as

step 1

step 2

maintain parent-child relationship

natural hierarchy

hierarchical, non-linear data structure

binary search trees (bsts)

storing hierarchical relationships (e.g., xml/html)

complex balancing

AVL trees

branch

decision making

organizing hierarchical data (e.g., file systems)

execute operations steps

deletion

insertion

root

edge

leaves

nodes with or without children

efficient searching, insertion, deletion

traversal

search

potential skew

efficiently searching or sorting large datasets

heaps

b-trees

go to implementation

define tree node structure

with step 1

cmap 2: implementation

recursive process

create recursive traversal methods

Main menu

Trees

with step 2

can have a

handle base case (leaf node)

with step 3

recurse on child nodes

with step 4

combine results from subtrees

with step 5

has operations such as

insertion

search

traversal

deletion

encompassing a

encompassing a

encompassing a

encompassing a

process

question

process

question

question

process

process

question

following

such as

following

such as

following

such as

such as

following

STEP 4

Where to Insert the New Node?

STEP 1

STEP 4

In what order to visit nodes?

STEP 1

How do we find a specific node?

STEP 3

STEP 1

How to remove while maintaining structure?

STEP 2

STEP 1

STEP 5

STEP 3

STEP 3

STEP 3

STEP 2

STEP 2

STEP 2

STEP 4

as

as

as

as

as

as

as

as

as

as

as

as

as

as

as

as

Find node to delete

Start at Root

If the node is not found, return false (the node doesn't exist).

Check the node's children

Compare new node's value with current node

Continue until the value is found or you reach a null/empty node.

Compare target value with current node

Update tree's balance (if required, like in AVL)

Repeat Step 2 until an empty spot is found

Choose Traversal Type

Update tree's balance (if required, like in AVL)

Start at Root

Recursively apply chosen order

Process each node during visit

Start at Root

if no children

if one child

if two children

if smaller

if larger

Insert new node at the empty position

Left, Root, Right

Root, Left, Right

Left, Right, Root

if values match

if target is smaller

if target is larger

Find In-Order Successor (smallest node in right subtree)

Replace Node's Value with Successor's Value

Remove Node

Move Left

Move Right

In-Order

Post-Order

Pre-Order

Node is Found

Move Right

Move Left

Replace Node's Value with Successor's Value

Delete the In-Order Successor

To Balance: After an insert or delete, calculate Balance Factor = Height of left subtree - Height of right subtree. If the balance factor is out of range (e.g., >1 or <-1 for AVL): Perform rotation(s): Single Rotation: Left or Right. Double Rotation: Left-Right or Right-Left. Update the heights and continue balancing if necessary.
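
A short C sketch of recursive BST insertion and in-order traversal matching the steps above (compare with the current node, move left or right, insert at the empty position); it omits balancing and uses invented names.

```c
#include <stdlib.h>

/* Binary search tree node. */
typedef struct TreeNode {
    int value;
    struct TreeNode *left, *right;
} TreeNode;

/* Recursive insertion: an empty position is the base case where the new
   node is created; otherwise compare and recurse left (smaller) or right (larger). */
static TreeNode *bst_insert(TreeNode *root, int value) {
    if (root == NULL) {                      /* empty spot found: insert here */
        TreeNode *n = malloc(sizeof *n);
        if (n) { n->value = value; n->left = n->right = NULL; }
        return n;
    }
    if (value < root->value)
        root->left = bst_insert(root->left, value);    /* if smaller, move left */
    else
        root->right = bst_insert(root->right, value);  /* if larger, move right */
    return root;
}

/* In-order traversal (Left, Root, Right) visits values in sorted order. */
static void in_order(const TreeNode *root, void (*visit)(int)) {
    if (root == NULL) return;                /* base case: past a leaf */
    in_order(root->left, visit);
    visit(root->value);
    in_order(root->right, visit);
}
```

Pre-order and post-order traversal only reorder the three lines in `in_order`; AVL balancing, as described above, would add height bookkeeping and rotations after each insert or delete.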

cmap 1: concept

Main menu

Graphs

asks the question of

how

where

when

why

what

leading to the

can this be used in

explaining

it is used in

can this be applied with the

operations

process

applications

problems solved

structure

advantages

disadvantages

scenarios

Adjacency List

including a(n)

for

with

including

including

having a(n)

such as

step 1

step 2

resizing operations new array

graph representation

VERTICES

Traversal (DFS/BFS)

CYCLE DETECTION

Adjacency Matrix

Shortest Path (Dijkstra’s Algorithm)

Dependency Tracking

Modeling Networks

Edges

Minimum Spanning Tree (Kruskal's or Prim's)

Efficient for Pathfinding

Models Real-World Relationships

Pathfinding

Adjacency Matrix

Adjacency List

unweighted graph

Optimized Network Routing

Computer Networks

Recommendation Systems

Task Scheduling

Directed Graph

Social Networks

Dependency Resolution

Modeling Connections

weighted graph

Undirected Graph

go to implementation

cmap 2: implementation

Main menu

Graphs

has operations such as

Graph Representation

traversal (dfs/bfs)

Cycle Detection

Minimum Spanning Tree (Kruskal's or Prim's)

Topological Sorting (for DAG)

Shortest Path (Dijkstra’s Algorithm)

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

encompassing a

question

question

process

process

process

question

question

process

question

process

question

process

such as

Step 1: Choose the algorithm (Kruskal's or Prim's). Step 2: Add edges or vertices to the MST based on the algorithm.

following

such as

following

such as

following

such as

such as

following

following

following

such as

Step 1: Choose adjacency matrix or adjacency list. Step 2: Implement the chosen representation.

Which representation to use: adjacency matrix or adjacency list?

Step 1: Select the starting vertex. Step 2: Choose DFS or BFS.

What vertex or index to check for a cycle?

Step 1: Initialize distances (source = 0, others = infinity)

Which algorithm: Kruskal's or Prim's?

What is the current capacity for sorting?

What is the source vertex?

Is the graph directed or undirected?

Access Steps

Where to start traversal?

How should the output be formatted?

Where to search for the MST?

Is the graph a DAG?

Step 2: Mark all nodes as unvisited

Which nodes require shortest paths?

Which method to use: DFS or BFS?

Step 3: Set source vertex as current

Adjacency Matrix

Adjacency List

to decide

to decide

Step 1: Perform DFS from each unvisited vertex. Step 2: Add vertices to the sort order after visiting all adjacent vertices. Step 3: Reverse the post-order traversal to get the topological sort.

DFS Process

to decide

BFS Process

Kruskal's Algorithm

Prim's Algorithm

Dijkstra’s Process

step 1

step 1

step 1

step 1

step 1

step 1

step 1

Step 1: Validate the vertex index (if necessary)

Start at root vertex

Start DFS from any unvisited vertex

Sort edges by weight

Start at root vertex

Start at any vertex

step 1

Create a 2D array (V x V)

Calculate tentative distances to neighboring nodes

step 2

step 2

step 2

step 2

step 2

step 2

step 2

Step 2: Perform DFS or BFS based on the graph type

step 3

For each vertex: Have all adjacent vertices been visited?

For each edge (u, v): Is there an edge between u and v? Yes: Mark matrix at [u][v] (and [v][u] for undirected) No: Leave matrix entry as 0

Mark vertex as visited

Add smallest edge to MST

Add the smallest edge that connects a new vertex

Enqueue and mark root as visited

step 2

Repeat for all vertices

Update shortest distance for each neighbor

step 3

step 3

step 3

step 3

step 3

step 3

Repeat for all vertices

Yes: Add the vertex to the stack (post-order) No: Continue DFS for adjacent vertices

Dequeue vertex and visit all adjacent vertices

Recursively visit unvisited adjacent vertices

Mark the vertex as visited

Check for cycle

step 3

Mark the current vertex as visited

step 4

step 4

step 4

step 1

step 4

step 4

Create a list for each vertex

Enqueue all adjacent unvisited vertices

Repeat until all vertices are connected (V-1 edges)

Backtrack if no adjacent vertex exists

Once all vertices are visited, reverse the post-order stack

step 4

Repeat until all vertices are included in the MST

step 5

Move to the unvisited node with the smallest distance

step 5

step 5

step 2

step 5

Continue until all vertices are visited

For each edge (u, v): Is there an edge between u and v? Yes: Add vertex v to the list of vertex u (and vice versa for undirected)

Repeat until all nodes are visited

Return the reversed stack as the topological sort

Continue until queue is empty
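
A compact C sketch of the BFS process above over an adjacency matrix; the vertex count V and the function name are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdio.h>

#define V 6   /* hypothetical number of vertices for the sketch */

/* BFS: enqueue the start vertex, then repeatedly dequeue a vertex and
   enqueue its unvisited neighbours until the queue is empty. */
static void bfs(const int adj[V][V], int start) {
    bool visited[V] = { false };
    int queue[V], front = 0, rear = 0;

    queue[rear++] = start;          /* enqueue and mark the root as visited */
    visited[start] = true;

    while (front < rear) {          /* continue until the queue is empty */
        int u = queue[front++];     /* dequeue a vertex */
        printf("visit %d\n", u);
        for (int v = 0; v < V; v++)             /* visit all adjacent vertices */
            if (adj[u][v] && !visited[v]) {
                visited[v] = true;
                queue[rear++] = v;  /* enqueue the adjacent unvisited vertex */
            }
    }
}
```

DFS follows the same marking logic but uses recursion (or an explicit stack) instead of the queue, which is the only structural difference between the two processes listed above.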


You did it! I'm proud of you. :)

1. What (Loops and Recursion):
  • Loops: Single loop: counts the iterations in a single loop (O(n)). Nested loop: counts the iterations in nested loops, where each loop depends on the other (O(n²)). Consecutive loops: adds the time complexity of consecutive loops (O(n) + O(m)).
  • Recursion: Identifying recurrence relations: analyzes how recursive calls are made and how they affect time complexity. Solving recurrence relations: uses the Master Theorem or recursion trees to solve and derive time complexity from recurrence relations.
2. Why (Understanding Time Complexity): Single and nested loops help in determining how algorithms grow with input size. Recursion is essential to understand how dividing problems into smaller subproblems affects time.
3. When (Use Cases): Loops: used when counting iterations in algorithms like linear search or selection sort. Recursion: useful in recursive algorithms such as divide-and-conquer strategies like Merge Sort or calculating Fibonacci numbers.
4. Where (Common Use Cases): Single loop: linear search or basic iteration through data. Nested loop: sorting algorithms like Bubble Sort that involve comparing elements in pairs. Recursion: algorithms like Merge Sort and algorithms involving trees or graphs.
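
As a rough illustration of how these iteration counts map to complexity classes, the C snippets below simply count operations; the function names are invented for the example.

```c
/* A single loop does n units of work: O(n). */
static long single_loop(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++) ops++;          /* O(n) */
    return ops;
}

/* Nested loops do n * n units of work: O(n^2). */
static long nested_loops(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) ops++;      /* O(n^2) */
    return ops;
}

/* Consecutive loops add their costs: O(n) + O(m). */
static long consecutive_loops(int n, int m) {
    long ops = 0;
    for (int i = 0; i < n; i++) ops++;          /* O(n) */
    for (int j = 0; j < m; j++) ops++;          /* + O(m) */
    return ops;
}
```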

how?

- Explain the step-by-step processes involved.


what?

- Provide a definition and key characteristics.
- Syntax: define the type (int, char, float); example: int arr[3]
- Initialization: int arr[3] = {1, 2, 3}
- Explain the fundamental principles.
- Discuss the basic structure or components.
- Include any related terminology and concepts.

what?

Definition: "Big O notation is a mathematical notation that describes the upper bound of an algorithm's time complexity, providing a worst-case scenario of its performance."
Focus: Time complexity: execution time (how the algorithm scales with input size). Space complexity: memory usage (how much memory the algorithm needs as the input size grows).

1. What (Best, Average, and Worst Case Scenarios): Best case: minimum time required under optimal conditions. Average case: expected time for typical conditions (random inputs). Worst case: maximum time required under the worst possible conditions.
2. Why (Understanding Different Cases): Helps in assessing how well an algorithm performs not just in optimal conditions but also when inputs are random or adversarial.
3. When (Scenarios Apply): Best case: useful when input is already sorted (e.g., searching the first element). Average case: typical inputs (random or mixed data). Worst case: the input is in reverse order or leads to maximum complexity (e.g., searching the last element).
4. Where (Common Use Cases): Best case: linear search when the target is the first element. Average case: Quick Sort with random pivots. Worst case: linear search when the target is not present or at the end.
5. How (Implementation of Different Cases): Best Case Example (Linear Search), sketched below.
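
The original linear-search example did not survive the export, so here is a minimal C sketch, assuming an int array, that illustrates the best case (target at index 0, O(1)) versus the worst case (target absent, O(n)).

```c
/* Linear search: best case O(1) when the first element is the target,
   worst case O(n) when the target is last or not present.
   Returns the index of the target, or -1 if it is not found. */
static int linear_search(const int arr[], int n, int target) {
    for (int i = 0; i < n; i++)
        if (arr[i] == target)
            return i;     /* best case: hit at i == 0 */
    return -1;            /* worst case: all n elements were scanned */
}
```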


Deletion:
1. What (Definition of Deletion): Deletion removes a key-value pair from the hash table.
2. Why (Purpose of Deletion): Necessary to free up space and maintain the integrity of the table when keys are no longer needed.
3. When (When to Delete): Used when a key is no longer needed or must be updated.
4. Where (Use Cases): Databases and cache systems often delete stale or invalid data.
5. How (Implementation of Deletion): Compute the hash value of the key. Find the bucket where the key is stored. Remove the key-value pair. Handle any collisions that may arise.


1. What (Definition of Access): Access refers to the ability to retrieve data from the hash table based on the key.
2. Why (Purpose of Access): Important for reading the stored data for further operations.
3. When (When to Access): Access is performed when the value associated with a key needs to be read.
4. Where (Use Cases): Often used in caching, databases, and associative arrays.
5. How (Implementation of Access): Typically, access is part of the search operation and follows the same logic as searching.

1. What (Definition of Print): Print outputs the current state of the hash table, showing all key-value pairs.
2. Why (Purpose of Print): Useful for debugging and verifying the contents of the hash table.
3. When (When to Print): Typically used during development or when needing to log the current state of the table.
4. Where (Use Cases): Print operations are common in testing environments or debugging situations.
5. How (Implementation of Print): Iterate through all the buckets. Print all key-value pairs stored in each bucket.

when?

- Discuss the situations or conditions for its use.

why?

- Mention the advantages and benefits. - Explain the problem it solves or the need it addresses.

how?

- Explain the step-by-step processes involved.

Where?

Search algorithms: Linear Search: O(n). Binary Search: O(log n).
Sorting algorithms: Bubble Sort: O(n²). Quick Sort: O(n log n) (average), O(n²) (worst case). Merge Sort: O(n log n).
Graph algorithms: Breadth-First Search (BFS): O(V + E), where V is vertices and E is edges. Dijkstra's Algorithm: O(V²), or O((V + E) log V) with a priority queue.

when?

- Discuss the situations or conditions for its use.

1. What (Definition of Resize): Resize involves expanding the hash table and rehashing all existing elements when the load factor exceeds a threshold.
2. Why (Purpose of Resize): To maintain efficiency when the number of elements exceeds the table's capacity, avoiding excessive collisions.
3. When (When to Resize): When the load factor (number of elements divided by table size) exceeds a certain threshold (e.g., 0.7).
4. Where (Use Cases): Hash tables in dynamic applications, such as memory caches or dynamic associative arrays.
5. How (Implementation of Resize): Double the size of the table. Rehash all existing elements into the new table. Update the capacity.

1. What (Definition of Search): Search finds the value associated with a key in the hash table.
2. Why (Purpose of Search): Essential for retrieving stored data quickly, based on the key.
3. When (When to Search): Occurs when data needs to be accessed using a known key.
4. Where (Use Cases): Common in lookup tables, caches, and dictionaries.
5. How (Implementation of Search): Compute the hash value of the key. Check the bucket for the key. If found, return the value. If not found, return an indication of failure.


when?

Scenarios: Used when there is an unknown dataset size. Frequent insertion or deletion of elements is required.

Where?

Applications:
  • Dynamic data structures, such as: linked lists, stacks, queues, hash tables, dynamic graphs.
  • Search algorithms, such as: quick selection, binary search, linear search.
  • Sorting algorithms, such as: bubble sort, quick sort, merge sort, bucket sort, insertion sort, selection sort, heap sort.

how?

Operations: Insertion, deletion, access, search, and resize are the key operations on a dynamic array. Process: Step 1: initialization. Step 2: execute operations.

learn more

Other Resources to Practice:

  • https://www.geeksforgeeks.org/array-data-structure-guide/
  • https://www.geeksforgeeks.org/top-50-array-coding-problems-for-interviews/
  • https://www.geeksforgeeks.org/c-arrays/
  • https://leetcode.com/tag/array/

what?

Definition: A dynamic array is a collection of elements that have the key characteristics of being accessed using indices, utilizing dynamic resizing, and being stored in contiguous blocks of memory. Structure: an underlying fixed-size array, a capacity variable (total allocated space), and a size variable for the number of elements.

why?

Advantages: Efficient random access: O(1) time complexity for accessing elements. Problems solved: static array limitations, such as fixed size; simplified operations.

Where?

- Specify where it is used in the real world. - Highlight relevant contexts and scenarios.

Search algorithms: Linear Search: O(n). Binary Search: O(log n).
Sorting algorithms: Bubble Sort: O(n²). Quick Sort: O(n log n) (average), O(n²) (worst case). Merge Sort: O(n log n).
Graph algorithms: Breadth-First Search (BFS): O(V + E), where V is vertices and E is edges. Dijkstra's Algorithm: O(V²), or O((V + E) log V) with a priority queue.

why?

- Mention the advantages and benefits. - Explain the problem it solves or the need it addresses.

1. What (Common Algorithm Examples): Search algorithms: Linear Search (O(n)): check each element until the target is found. Binary Search (O(log n)): divide the search space in half at each step; requires sorted data. Sorting algorithms: Bubble Sort (O(n²)): repeatedly swap adjacent elements if they are in the wrong order. Merge Sort (O(n log n)): divide the array into halves, sort each half, and then merge them. Graph algorithms: Breadth-First Search (BFS): O(V + E), where V is vertices and E is edges. Dijkstra's Algorithm: O(V²), or O((V + E) log V) using a priority queue, for shortest paths.
2. Why (Why These Examples Matter): Search algorithms are common for finding specific elements in datasets. Sorting algorithms are essential for arranging data in a particular order efficiently. Graph algorithms are critical for pathfinding, traversal, and optimization in network-like structures.
3. When (Use Cases for Each Example): Linear Search: used when data is unsorted. Binary Search: used when data is sorted. Bubble Sort: simple but inefficient for large datasets. Merge Sort: efficient for larger datasets. BFS: used in shortest path problems. Dijkstra's Algorithm: optimal for weighted-graph shortest paths.
4. Where (Common Use Cases): Search algorithms: databases and data retrieval systems. Sorting algorithms: ordering elements, especially in preprocessing for search algorithms. Graph algorithms: network routing, social network analysis, and many real-world applications.
5. How (Implementation of Algorithms): Binary Search Example, sketched below.
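
The binary-search example referenced above is likewise missing from the transcript; a minimal C sketch, assuming a sorted int array, follows.

```c
/* Binary search on a sorted array: O(log n). Halve the search space each
   step by comparing the target with the middle element.
   Returns the index of the target, or -1 if it is not present. */
static int binary_search(const int arr[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;       /* avoids overflow of (low + high) */
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) low = mid + 1;   /* target is in the right half */
        else                   high = mid - 1;  /* target is in the left half */
    }
    return -1;
}
```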

Where?

- Specify where it is used in the real world. - Highlight relevant contexts and scenarios.

how?

- Explain the step-by-step processes involved.

when?

Best case complexity: Definition: minimum time required under optimal conditions. Example: linear search when the first element is the target (O(1)).
Worst case complexity: Definition: maximum time required under the worst conditions. Example: linear search when the target is the last element or not present (O(n)).
Average case complexity: Definition: expected runtime for random inputs. Example: Quick Sort with random pivots (O(n log n)).

1. What (Memory and Recursive Algorithms): Memory allocation: focuses on how much memory is allocated for variables and data structures. Recursive algorithms: include the stack space used in recursion, which can grow with recursion depth. Optimizing space usage: techniques like in-place algorithms reduce space complexity.
2. Why (Memory Usage Considerations): Memory allocation is important in resource-constrained environments. Recursive algorithms can lead to stack overflow if the recursion depth is too large.
3. When (Space Optimization Needed): Used when space is limited or the algorithm has high memory requirements (e.g., deep recursion).
4. Where (Common Use Cases): Recursive algorithms (e.g., recursive tree traversal); in-place sorting algorithms (e.g., Quick Sort).
5. How (Implementation of Space Optimization): Recursive Example (Fibonacci with memoization to avoid recomputation), sketched below.
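
The Fibonacci-with-memoization example referenced above is not in the transcript; here is a minimal C sketch, where the cache array and the FIB_MAX bound are assumptions made for illustration.

```c
/* Fibonacci with memoization: caching results turns the exponential
   recursion tree into O(n) time; the cache and the recursion stack
   both use O(n) space. */
#define FIB_MAX 90   /* hypothetical upper bound on n for this sketch */

static long long fib_memo(int n, long long cache[]) {
    if (n <= 1) return n;                   /* base cases: fib(0) = 0, fib(1) = 1 */
    if (cache[n] != 0) return cache[n];     /* reuse a previously computed result */
    cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
    return cache[n];
}

/* Usage: long long cache[FIB_MAX] = {0}; fib_memo(40, cache); */
```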

1. What (Definition of Insertion): Insertion is the process of adding a key-value pair into a hash table.
2. Why (Purpose of Insertion): The insertion operation is critical for storing data in the hash table efficiently, allowing for quick retrieval later.
3. When (When to Insert): Insertions occur when a new key-value pair needs to be stored or when resizing requires rehashing of elements.
4. Where (Use Cases): Hash tables are used in databases, caches, and associative arrays (dictionaries), where fast insertion is essential.
5. How (Implementation of Insertion): Compute the hash value of the key. Check if the corresponding bucket is empty. If the bucket is empty, insert the key-value pair. If there is a collision, handle it using techniques like separate chaining or open addressing.

what?

- Provide a definition and key characteristics.
- Syntax: define the type (int, char, float); example: int arr[3]
- Initialization: int arr[3] = {1, 2, 3}
- Explain the fundamental principles.
- Discuss the basic structure or components.
- Include any related terminology and concepts.

Purpose: Provides a worst-case scenario for algorithm performance. Helps compare the efficiency of algorithms.
Types of complexity classes:
  • Constant time (O(1)): execution time remains constant, regardless of input size. Example: accessing an element in an array by index.
  • Logarithmic time (O(log n)): execution time increases logarithmically with input size. Example: binary search in a sorted array.
  • Linear time (O(n)): execution time grows linearly with input size. Example: iterating through an entire array.
  • Linearithmic time (O(n log n)): execution time grows linearly, multiplied by a logarithmic factor. Example: efficient sorting algorithms like Merge Sort.
  • Quadratic time (O(n²)): execution time grows proportionally to the square of the input size. Example: nested loops such as in Bubble Sort.
  • Cubic time (O(n³)): execution time grows proportionally to the cube of the input size. Example: matrix operations with three nested loops.
  • Exponential time (O(2^n)): execution time doubles with each addition to the input size. Example: recursive algorithms like the Towers of Hanoi.
  • Factorial time (O(n!)): execution time grows factorially with the input size. Example: permutations of a set, such as the Traveling Salesman Problem.

why?