
Data Structures


Why do we study data structures and algorithms?

In computer science, a data structure is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of a data structure often begins with the choice of an abstract data type. A well-designed data structure allows a variety of critical operations to be performed using as few resources, both execution time and memory space, as possible. Data structures are implemented in a programming language as data types together with the references and operations they provide.




About Pointers

=====================================================================

Pointers and Structures

====================================================================



Introduction to Data Structures

An organised collection of data is called a data structure.
data structure = collection of data + allowed operations

Classification of data structures

1) Linear and non-linear
2) Static and dynamic
3) Homogeneous and non-homogeneous


Linear Data Structure:

A data structure is linear if its elements form a sequence: each element is adjacent to at most two
neighboring elements, one as its previous and one as its next member.


Examples of Linear Data Structures



  • Array
  • Linked List
  • Stack
  • Queue


Non-Linear Data Structure

A data structure is non-linear if an element can be connected to more than two
adjacent elements, so the data does not form a simple sequence.




Examples of Non-Linear Data Structures:

  • Tree
  • Graph



Static and Dynamic Data Structures:

In a static data structure the size, and the memory locations associated with it, are fixed at compile time, e.g. an array.

A dynamic data structure can expand or shrink during program execution, and its associated memory locations can change, e.g. a linked list.
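
As a rough sketch in C (illustrative names only), a static array has its size fixed when the program is compiled, while a dynamic structure such as a linked list grows one node at a time with memory requested at run time:

#include <stdio.h>
#include <stdlib.h>

struct node {                     /* one link of a singly linked list */
    int data;
    struct node *next;
};

int main(void)
{
    int arr[10];                                  /* static: size fixed at compile time */

    /* dynamic: a new node can be allocated whenever it is needed */
    struct node *head = malloc(sizeof *head);
    if (head == NULL)
        return 1;
    head->data = 42;
    head->next = NULL;

    printf("%zu bytes reserved for arr, node holds %d\n", sizeof arr, head->data);
    free(head);
    return 0;
}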



Homogeneous and Non-homogeneous Structures

An array is a homogeneous structure, in which all elements are of the same type.

A structure (struct) is a non-homogeneous structure, since it can hold elements of different types.
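
For example (a small sketch with made-up field names), an array holds elements of a single type, while a C structure can group fields of different types:

#include <stdio.h>

struct employee {                 /* non-homogeneous: different types grouped together */
    int   id;
    char  name[20];
    float salary;
};

int main(void)
{
    int marks[5] = {90, 85, 70, 95, 60};          /* homogeneous: all ints */
    struct employee e = {101, "John Brown", 52000.0f};

    printf("first mark: %d, employee: %s (%d)\n", marks[0], e.name, e.id);
    return 0;
}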



====================================================================

====================================================================

Array Data Inserting and Deleting  

====================================================================

Array Program to Find Maximum element

====================================================================
====================================================================

Inverse of array


====================================================================

STRING 
====================================================================





Linear Search


====================================================================



Program to Reverse an Array



====================================================================


Merge of Two Arrays in C


====================================================================
Linked List                      Linked List (in Hindi)


====================================================================


Stack         Stack in Hindi


=====================================================================
Queue


====================================================================
Tree Data Structure


====================================================================










Data Structure in C

Memory plays a vital role in C because every variable you declare is stored in memory. Some memory allocation is done at compile time and some at runtime. Memory which is allocated and de-allocated during the execution of the program is known as dynamic memory. Dynamic memory can expand and shrink as the application requires, and this is one of the major advantages of pointers.

Memory can be categorised into the following segments:


  1. Text Segment
  2. Data Segment
  3. Free Space
  4. Stack Segment

 The Data Segment is further divided into two parts:

Static Segment
Heap Segment

The Static Segment holds the global variables and the Heap Segment holds the dynamic variables. Dynamic memory allocation is done using the malloc() function in C.
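
The short sketch below is illustrative only (the exact layout is decided by the compiler and the operating system), but it shows which segment each kind of variable typically occupies:

#include <stdio.h>
#include <stdlib.h>

int global_count = 0;              /* static (data) segment: exists for the whole run */

int main(void)
{
    int local = 5;                                /* stack segment: created on function entry */
    int *dynamic = malloc(10 * sizeof *dynamic);  /* heap segment: allocated at run time */

    if (dynamic == NULL)
        return 1;

    dynamic[0] = local + global_count;
    printf("%d\n", dynamic[0]);

    free(dynamic);                                /* heap memory must be released explicitly */
    return 0;
}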




All programmers should know something about basic data structures like stacks, queues and heaps. Graphs are a tremendously useful concept, and two-three trees solve a lot of problems inherent in more basic binary trees.

  1. Stack Data Structure
  2. The Queue Data Structure
  3. Heaps
  4. Hash Tables
  5. Graphs in computer science
  6. Two-three trees



The Stack Data Structure in C and C++


By Alex Allain

The stack is a common data structure for representing things that need to be maintained in a particular order. For instance, when a function calls another function, which in turn calls a third function, it's important that the third function return back to the second function rather than the first.




One way to think about this implementation is to think of functions as being stacked on top of each other; the last one added to the stack is the first one taken off. In this way, the data structure itself enforces the proper order of calls.

Conceptually, a stack is simple: a data structure that allows adding and removing elements in a particular order. Every time an element is added, it goes on the top of the stack; the only element that can be removed is the element that was at the top of the stack.

Consequently, a stack is said to have "first in last out" behavior (or "last in, first out"). The first item added to a stack will be the last item removed from a stack.

So what's the big deal? Where do stacks come into play? As you've already seen, stacks are a useful way to organize our thoughts about how functions are called. In fact, the "call stack" is the term used for the list of functions either executing or waiting for other functions to return.



Stacks have some useful terminology associated with them:


  • Push: add an element to the top of the stack
  • Pop: remove the element at the top of the stack
  • Peek: look at the element at the top of the stack without removing it
  • LIFO: refers to the last in, first out behavior of the stack
  • FILO: equivalent to LIFO
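
As a rough illustration of these operations, here is a minimal array-based stack sketch in C (the fixed capacity and the function names are assumptions for this example, not part of the original article):

#include <stdio.h>

#define MAX_SIZE 100

int stack[MAX_SIZE];
int top = -1;                      /* -1 means the stack is empty */

int push(int value)                /* add an element to the top */
{
    if (top == MAX_SIZE - 1)
        return 0;                  /* stack overflow */
    stack[++top] = value;
    return 1;
}

int pop(int *value)                /* remove the element at the top */
{
    if (top == -1)
        return 0;                  /* stack underflow */
    *value = stack[top--];
    return 1;
}

int peek(int *value)               /* look at the top element without removing it */
{
    if (top == -1)
        return 0;
    *value = stack[top];
    return 1;
}

int main(void)
{
    int x;
    push(1); push(2); push(3);
    while (pop(&x))
        printf("%d ", x);          /* prints 3 2 1 -- last in, first out */
    printf("\n");
    return 0;
}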




The Queue Data Structure in C++

By Eric Suh
Queues are data structures that, like the stack, have restrictions on where you can add and remove elements. 

To understand a queue, think of a cafeteria line: the person at the front is served first, and people are added to the line at the back. Thus, the first person in line is served first, and the last person is served last. This can be abbreviated to First In, First Out (FIFO).


The cafeteria line is one type of queue. Queues are often used in programming networks, operating systems, and other situations in which many different processes must share resources such as CPU time.

One bit of terminology: the addition of an element to a queue is known as an enqueue, and removing an element from the queue is known as a dequeue.

Although the concept may be simple, programming a queue is not as simple as programming a stack. Let's go back to the example of the cafeteria line. Let's say one person leaves the line. Then what? Everyone in line must step forward, right? Now, imagine if only one person could move at a time. So, the second person steps forward to fill the space left by the first person, and then the third person steps forwards to fill the space left by the second person, and so on. Now imagine that no one can leave or be added to the line until everyone has stepped forward. You can see the line will move very slowly. It is not difficult to program a queue that works, but it is quite a proposition to make a queue that is fast!!

There are a couple of basic ways to implement a queue. The first is to just make an array and shift all the elements to accommodate enqueues and dequeues. This, as we said above, is slow.

The slowness of the previous method is that with many elements, the shifting takes time. Thus, another method, instead of shifting the elements, shifts the enqueue and dequeue points. Imagine that cafeteria line again. If the front of the line continually moves backwards as each person leaves the line, then the people don't need to step forward or backwards, which saves time.

Unfortunately, this method is much more complicated than the basic method. Instead of keeping track of just the enqueue point (the "end"), we also need to keep track of the dequeue point (the "front"). This all gets even more complicated when we realize that after a bunch of enqueues and dequeues, the line will need to wrap around the end of the array. Think of the cafeteria line. As people enter and leave the line, the line moves farther and farther backwards, and eventually it will circle the entire cafeteria and end up at its original location.

The best way to understand all of this is to read some source code. The example source code uses the second method, and is commented to help you understand. Try working through the code on paper.
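
That example code is not reproduced here, but the following is a rough sketch in C of the same idea: instead of shifting elements, the front index moves and wraps around the end of a fixed-size array (the capacity and all names are illustrative assumptions, not the article's original code):

#include <stdio.h>

#define CAPACITY 8

int queue[CAPACITY];
int front = 0;                     /* index of the next element to dequeue */
int count = 0;                     /* how many elements are currently stored */

int enqueue(int value)             /* add at the back */
{
    if (count == CAPACITY)
        return 0;                  /* queue is full */
    queue[(front + count) % CAPACITY] = value;    /* wrap around the end of the array */
    count++;
    return 1;
}

int dequeue(int *value)            /* remove from the front */
{
    if (count == 0)
        return 0;                  /* queue is empty */
    *value = queue[front];
    front = (front + 1) % CAPACITY; /* the front moves through the array instead of the people */
    count--;
    return 1;
}

int main(void)
{
    int x;
    enqueue(10); enqueue(20); enqueue(30);
    while (dequeue(&x))
        printf("%d ", x);          /* prints 10 20 30 -- first in, first out */
    printf("\n");
    return 0;
}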



What is a heap?


By Alex Allain

A heap is a partially sorted binary tree. Although a heap is not completely in order, it conforms to a sorting principle: every node has a value less (for the sake of simplicity, we will assume that all orderings are from least to greatest) than either of its children. Additionally, a heap is a "complete tree" -- a complete tree is one in which there are no gaps between leaves. For instance, a tree with a root node that has only one child must have its child as the left node. More precisely, a complete tree is one that has every level filled in before adding a node to the next level, and one that has the nodes in a given level filled in from left to right, with no breaks.



Why use a heap?

A heap can be thought of as a priority queue; the most important node will always be at the top, and when removed, its replacement will be the most important. This can be useful when coding algorithms that require certain things to processed in a complete order, but when you don't want to perform a full sort or need to know anything about the rest of the nodes. For instance, a well-known algorithm for finding the shortest distance between nodes in a graph, Dijkstra's Algorithm, can be optimized by using a priority queue.

Heaps can also be used to sort data. A heap sort runs in O(n log n) time, though it is not the fastest possible sorting algorithm. Check out the heap sort tutorial for more information.

How do you implement a heap?

Although the concept of a heap is simple, the actual implementation can appear tricky. How do you remove the root node and still ensure that it is eventually replaced by the correct node? How do you add a new node to a heap and ensure that it is moved into the proper spot?

The answers to these questions are more straightforward than they might seem, but to understand the process, let's first take a look at two operations that are used for adding and removing nodes from a heap: upheaping and downheaping.

Upheap: The upheap process is used to add a node to a heap. When you upheap a node, you compare its value to its parent node; if its value is less than its parent node, then you switch the two nodes and continue the process. Otherwise the condition is met that the parent node is less than the child node, and so you can stop the process. Once you find a parent node that is less than the node being upheaped, you know that the heap is correct--the node being upheaped is greater than its parent, and its parent is greater than its own parent, all the way up to the root.

Downheap: The downheap process is similar to the upheaping process. When you downheap a node, you compare its value with its two children. If the node is less than both of its children, it remains in place; otherwise, if it is greater than one or both of its children, then you switch it with the child of lowest value, thereby ensuring that of the three nodes being compared, the new parent node is lowest. Of course, you cannot be assured that the node being downheaped is in its proper position -- it may be greater than one or both of its new children; the downheap process must be repeated until the node is less than both of its children.

When you add a new node to a heap, you add it to the rightmost unoccupied leaf on the lowest level. Then you upheap that node until it has reached its proper position. In this way, the heap's order is maintained and the heap remains a complete tree.

Removing the root node from a heap is almost as simple: when you take the node out of the tree, you replace it with "last" node in the tree: the node on the last level and rightmost on that level.

Once the top node has been replaced, you downheap the node that was moved until it reaches its proper position. As usual, the result will be a proper heap, as it will be complete, and even if the node in the last position happens to be the greatest node in the entire heap, it will do no worse than end up back where it started.
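
The following is a minimal sketch in C of these ideas (an array-based min-heap with a fixed capacity; all names are illustrative assumptions): insertion places the new value in the last position and upheaps it, while removing the root moves the last node to the top and downheaps it.

#include <stdio.h>

#define HEAP_MAX 100

int heap[HEAP_MAX];
int heap_size = 0;

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static void upheap(int i)          /* move node i up while it is less than its parent */
{
    while (i > 0 && heap[i] < heap[(i - 1) / 2]) {
        swap(&heap[i], &heap[(i - 1) / 2]);
        i = (i - 1) / 2;
    }
}

static void downheap(int i)        /* move node i down while one of its children is smaller */
{
    for (;;) {
        int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
        if (left  < heap_size && heap[left]  < heap[smallest]) smallest = left;
        if (right < heap_size && heap[right] < heap[smallest]) smallest = right;
        if (smallest == i)
            break;
        swap(&heap[i], &heap[smallest]);
        i = smallest;
    }
}

int heap_insert(int value)         /* add at the last position, then upheap */
{
    if (heap_size == HEAP_MAX)
        return 0;
    heap[heap_size] = value;
    upheap(heap_size);
    heap_size++;
    return 1;
}

int heap_remove_min(int *value)    /* replace the root with the last node, then downheap */
{
    if (heap_size == 0)
        return 0;
    *value = heap[0];
    heap[0] = heap[--heap_size];
    downheap(0);
    return 1;
}

int main(void)
{
    int x;
    heap_insert(5); heap_insert(1); heap_insert(9); heap_insert(3);
    while (heap_remove_min(&x))
        printf("%d ", x);          /* prints 1 3 5 9 */
    printf("\n");
    return 0;
}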

Efficiency of a heap

Whenever you work with a heap, most of the time taken by the algorithm will be in upheaping and downheaping. As it happens, the maximum number of levels of a complete tree is log(n)+1, where n is the number of nodes in the tree. Because upheap or downheap moves an element from one level to another, the order of adding to or removing from a heap is O(logn), as you can make switches only log(n) times, or one less time than the number of levels in the tree (consider that a two level tree can have only one switch).


Hash Tables

By Eric Suh
Hash tables are an efficient implementation of a keyed array data structure, a structure sometimes known as an associative array or map. If you're working in C++, you can take advantage of the STL map container for keyed arrays implemented using binary trees, but this article will give you some of the theory behind how a hash table works.


Keyed Arrays vs. Indexed Arrays

One of the biggest drawbacks to a language like C is that there are no keyed arrays. In a normal C array (also called an indexed array), the only way to access an element would be through its index number. To find element 50 of an array named "employees" you have to access it like this:

employees[50];
In a keyed array, however, you would be able to associate each element with a "key," which can be anything from a name to a product model number. So, if you have a keyed array of employee records, you could access the record of employee "John Brown" like this:

employees["Brown, John"];
One basic form of a keyed array is called the hash table. In a hash table, a key is used to find an element instead of an index number. Since the hash table has to be coded using an indexed array, there has to be some way of transforming a key to an index number. That way is called the hashing function.

Hashing Functions

A hashing function can be just about anything. How the hashing function is actually coded depends on the situation, but generally the hashing function should return a value based on a key and the size of the array the hashing table is built on. Also, one important thing that is sometimes overlooked is that a hashing function has to return the same value every time it is given the same key.

Let's say you wanted to organize a list of about 200 addresses by people's last names. A hash table would be ideal for this sort of thing, so that you can access the records with the people's last names as the keys.

First, we have to determine the size of the array we're using. Let's use a 260 element array so that there can be an average of about 10 element spaces per letter of the alphabet.

Now, we have to make a hashing function. First, let's create a relationship between letters and numbers:

A --> 0
B --> 1
C --> 2
D --> 3
...
and so on until Z --> 25.
The easiest way to organize the hash table would be based on the first letter of the last name.

Since we have 260 elements, we can multiply the first letter of the last name by 10. So, when a key like "Smith" is given, the key would be transformed to the index 180 (S is the 19th letter of the alphabet, so S --> 18, and 18 * 10 = 180).
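
A minimal sketch of such a hashing function in C might look like this (the 260-element table size is the running example; the function name is just illustrative):

#include <stdio.h>
#include <ctype.h>

#define TABLE_SIZE 260

/* Map a last name to an index: first letter (A -> 0 ... Z -> 25) times 10.
   The same key must always produce the same index. */
int hash_last_name(const char *name)
{
    int letter = toupper((unsigned char)name[0]) - 'A';
    return (letter * 10) % TABLE_SIZE;
}

int main(void)
{
    printf("%d %d\n", hash_last_name("Smith"), hash_last_name("Webster"));  /* 180 220 */
    return 0;
}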

Since we use a simple function to generate an index number quickly, and we use the fact that the index number can be used to access an element directly, a hash table's access time is quite small. A linked list of keys and elements wouldn't be nearly as fast, since you would have to search through every single key-element pair.

Collisions and Collision Handling

Problems, of course, arise when we have last names with the same first letter. So "Webster" and "Whitney" would correspond to the same index number, 220. A situation like this when two keys get sent to the same location in the array is called a collision. If you're trying to insert an element, you might find that the space is already filled by a different one.

Of course, you might try to just make a huge array and thus make it almost impossible for collisions to happen, but then that defeats the purpose of using a hash table. One of the advantages of the hash table is that it is both fast and small.

Collision handling with open addressing

The simplest collision handling algorithm is known as the open address method or the closed hashing method. When you are adding an element, say "Whitney," and you find that another element is already there ("Webster," for instance) then you would just proceed to the next element space (the one after "Webster"). If that is filled, you go on to the next one, and so on, until you find an empty space to insert the new element (all those extra elements came in handy after all!)

...
220 "White"   | <-- ### COLLISION ### : Gotta move on to the next.
221 "Webster" | <-- ### COLLISION ### : Next one.
222           | Ahhh, perfect. Insert Here.
223           |
...
Since we modified the insertion algorithm, we also have to change the function that finds the element. You have to have some way of verifying that you've found the element you want, and not some other element. The simplest way is to just compare keys. (Does this record have the last name "Whitney"? Does this one?) If the element you find is not one of them, just move on to the next element until you reach the one you want or you find an empty space (which means the element is not in the table).
Sounds simple, right? Well, it gets more complicated. What if you have so many collisions that you run off the end of the array?

If you're trying to insert "Zorba" and all the elements are filled because of the collision handling, then what? Look at the example:

...
258 "Whitney"   | <-- Nope, not Empty
259 "Zeno"      | Nope, not Empty
----------------  <-- Ummm, what now?

The easiest thing to do is to just wrap around to the beginning again. If there are still no empty spaces, then we have to resize the array, since there isn't enough space in the hash table for all of the elements. If we resize the array, of course, we'll have to come up with a tweak to our hash function (or at least how we handle it) so that it covers the right range of values again, but at least we'll have room. (Note that resizing the array means that occasionally inserting a value into the list will cause an O(n) copy operation to take place, but that on average this should happen only once for every n items inserted, so insertion should be on average constant time, O(1). If you aren't sure what terms like "O(n)" and "constant time" mean, take a look at the Cprogramming.com article series on algorithmic efficiency.) As you can see, resizing isn't all that bad--still, if you know the amount of space you will need to start with, you can save your program some work.
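
Here is a simplified sketch of open addressing in C, with linear probing that wraps around the end of a fixed-size table (no resizing; the names and sizes are assumptions for illustration):

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 260

static const char *table[TABLE_SIZE];      /* NULL marks an empty slot */

/* Insert a key at its hash index, probing forward (and wrapping) on collision. */
int insert(int index, const char *key)
{
    int i;
    for (i = 0; i < TABLE_SIZE; i++) {
        int slot = (index + i) % TABLE_SIZE;   /* wrap around to the beginning */
        if (table[slot] == NULL) {
            table[slot] = key;
            return slot;
        }
    }
    return -1;                             /* table is completely full -- time to resize */
}

/* Find a key starting from its hash index; an empty slot means it is not in the table. */
int find(int index, const char *key)
{
    int i;
    for (i = 0; i < TABLE_SIZE; i++) {
        int slot = (index + i) % TABLE_SIZE;
        if (table[slot] == NULL)
            return -1;
        if (strcmp(table[slot], key) == 0)
            return slot;
    }
    return -1;
}

int main(void)
{
    insert(220, "Webster");                /* both keys hash to 220 in the running example */
    insert(220, "Whitney");                /* collides, so it lands in the next slot, 221 */
    printf("Whitney is at index %d\n", find(220, "Whitney"));
    return 0;
}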

Handling collisions with separate chaining

A second collision handling strategy is to store a linked list at each element in the hash data structure. This way, when a collision occurs, you can just add the element into the linked list that is stored at the hash index. If you have only a single element with a particular hash value, then you have a single element list--no performance penalty. If you have a lot of elements hashing to the same value, you'll see a slowdown of course, but no more than you otherwise would see with hash collisions.

One nice thing about separate chaining is that having a bunch of values that hash "near" each other is less important. With open addressing, if you have a cluster of values that hash to nearly the same value, you'll run out of open space in that part of the hash. With separate chaining, each element that has a different hash value will not impact the other elements.
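
A rough sketch of separate chaining in C, where every table slot holds the head of a linked list and colliding keys are simply linked into that list (names are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 260

struct chain_node {
    const char *key;                       /* the caller keeps the key string alive */
    struct chain_node *next;
};

static struct chain_node *buckets[TABLE_SIZE];   /* each slot is a (possibly empty) list */

/* On a collision the new key is simply linked in front of the ones already there. */
int chain_insert(int index, const char *key)
{
    struct chain_node *node = malloc(sizeof *node);
    if (node == NULL)
        return 0;
    node->key = key;
    node->next = buckets[index];
    buckets[index] = node;
    return 1;
}

struct chain_node *chain_find(int index, const char *key)
{
    struct chain_node *node;
    for (node = buckets[index]; node != NULL; node = node->next)
        if (strcmp(node->key, key) == 0)
            return node;
    return NULL;                           /* not in this bucket's list */
}

int main(void)
{
    chain_insert(220, "Webster");          /* both keys hash to the same index */
    chain_insert(220, "Whitney");          /* no probing needed: both live in bucket 220 */
    printf("%s\n", chain_find(220, "Whitney") ? "found Whitney" : "not found");
    return 0;
}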

Resizing dynamically based on a load factor

Generally speaking, you wouldn't want your hash table to grow completely full because this will make lookups take much longer. If a value isn't in the array, with open addressing, you have to keep looking until you hit an empty location or you get back to the starting point--in other words, with a completely full table, lookups could be O(n), which is horrible. A real hash table implementation will keep track of its load factor, the ratio of elements to array size. If you have a 10 element array, with 7 elements, the load factor is 0.7. In fact, 0.7 is generally about the right time to resize the underlying array.
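
In code, the check might be as small as this sketch (the 0.7 threshold is taken from the paragraph above; when it returns true, the caller would allocate a bigger array and re-hash every element):

#include <stdio.h>

/* Returns 1 when the table should grow, using the 0.7 load-factor rule. */
int needs_resize(int element_count, int table_size)
{
    double load_factor = (double)element_count / table_size;
    return load_factor >= 0.7;
}

int main(void)
{
    printf("%d\n", needs_resize(7, 10));   /* 1: time to resize and re-hash */
    return 0;
}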

Choosing a Good Hash Algorithm

The more collisions you have, the worse the performance of your hash table will be. With enough elements in your hash table, you can get an average performance that's quite good--essentially constant time O(1). (The trick is to make the array grow over time as you start to fill up the array.) But if you have a lot of elements that hash to the same value, then you will have to start doing a lookup through a list of elements that all have the same hash value. This can make your hash lookups go from constant time to being, well, linear time in the number of elements. Imagine if your hash function hashed all values to 0, putting them in the first element of the array. Then it would be just a really complicated way of implementing a linear search.

Choosing a good hash algorithm can require some care and experimentation, and it will depend on your problem domain. If you're working with names, you probably don't want a hash algorithm that just looks at the first letter, because the letters of the alphabet are not used evenly--you'll find a lot more names that start with S than with Z. You also want to have your hash functions be fast--you don't want to lose all the time savings you're getting from the hash table because you're computing the hash function really slowly. It's a delicate balance. For one good hash function, check out this hash algorithm.

Now you're ready to implement your first hash table! Give it a try. It isn't too hard, and the end result is quite useful.



Graphs and graph theory in computer science


By Alex Allain

What is a graph?

A graph is a way of representing connections between places. Mathematically, a graph is a collection of nodes and edges. Nodes are locations that are connected together by the edges of the graph. For instance, if you had two small towns connected by a two-way road, you could represent this as a graph with two nodes, each node representing a town, and one edge, the road, connecting the two towns together. In addition to the undirected graph, in which the edge is a two-way connection, there are directed graphs, in which edges connect only one way. For instance, you could represent the previous example of two cities connected by a road as a directed graph consisting of two nodes and two edges, each edge connecting one of the nodes to the other. In the city example, it may also be convenient to record the distance between the two cities; this can be expressed by adding a 'weight' to an edge, which is a number that usually corresponds to the distance covered by an edge (the distance between two nodes).
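
One simple way to store such a graph in C is an adjacency matrix, where entry [i][j] holds the weight of the edge from node i to node j and 0 means no edge; the two-town example might be sketched like this (sizes and names are illustrative):

#include <stdio.h>

#define NODES 2
#define NO_EDGE 0

int main(void)
{
    /* weight[i][j] is the distance of the road from town i to town j */
    int weight[NODES][NODES] = {{NO_EDGE, NO_EDGE},
                                {NO_EDGE, NO_EDGE}};

    /* An undirected road of length 12 between town 0 and town 1
       is stored as two directed edges, one in each direction. */
    weight[0][1] = 12;
    weight[1][0] = 12;

    if (weight[0][1] != NO_EDGE)
        printf("town 0 -> town 1, distance %d\n", weight[0][1]);
    return 0;
}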




Why are graphs useful?

Computer scientists have developed a great deal of theory about graphs and operations on them. One reason for this is because graphs can be used to represent many problems in computer science that are otherwise abstract. Finding a way to represent the solution to a problem as a graph can present new approaches to solving the problem or even lead directly to a solution derived from graph theory. This sort of technique is often used when discussing algorithmic efficiency and when trying to prove that a certain algorithm is NP-Complete; because many problems involving graphs, such as finding the shortest path to traverse all nodes (the Traveling Salesman Problem), are NP-Complete, if you can find a way to represent a problem as a graph and show that it is analogous to one of the other NP-Complete problems, then you can show the problem you are trying to solve is also NP-Complete, which gives you a hint that the solution will take a great deal of time.

Another reason for using graphs is that many problems computers are used to solve involve representing relationships between objects, places, or concepts. Because graphs can be either directed or undirected, they are a flexible method of showing connections. For instance, you can describe who knows whom in a room as a collection of nodes, each representing a person, and directed edges, each representing that one person knows another.

Because graphs are so often used and because they allow the representation of many problems in computer science, such as the Traveling Salesman Problem or something as simple as the relationships between people in a room, they are a convenient means of expressing problems with which many people are comfortable. This familiarity simplifies the process of creating mental models of problems, which ultimately leads to better problem solving.

What are some important graph theory terms?

Directed Graph: A directed graph is one in which edges connect nodes in only one direction.

Undirected Graph: An undirected graph is one in which edges connect nodes bidirectionally (in both directions).

Node: A node, usually drawn as a circle, represents an item that can be related to other items or nodes. Nodes are sometimes referred to as vertices.

Edge: An edge represents a connection between nodes. Edges can be either directed or undirected, depending on the type of graph. Edges can also have weights, which may correspond to strength of relationship or distance between nodes. (For instance, if a graph represents a map, then the weights of each edge will represent the distance between two nodes.)

Connected: A graph is connected if from any node you can reach any other node.

Disconnected: A graph is disconnected if certain groups of nodes form an island that has no connection to the rest of the graph.






2-3 Trees in C and C++

By Eric Suh
The 2-3 tree is also a search tree like the binary search tree, but this tree tries to solve the problem of the unbalanced tree.






Imagine that you have a binary tree to store your data. The worst possible case for the binary tree is that all of the data is entered in order. Then each node has only one child, and the tree becomes one long chain.



This tree has basically turned into a linked list. This is definitely a problem, for with a tree unbalanced like this, all of the advantages of the binary search tree disappear: searching the tree is slow and cumbersome, and there is much wasted memory because of the empty left child pointers.

The 2-3 tree tries to solve this by using a different structure and slightly different adding and removing procedure to help keep the tree more or less balanced. The biggest drawback with the 2-3 tree is that it requires more storage space than the normal binary search tree.

The 2-3 tree is called such because the maximum possible number of children each node can have is either 2 or 3. This makes the tree a bit more complex, so I will try to explain as much as possible.

One big difference with the 2-3 tree is that each node can have up to two data fields, with up to three children extending from around and between the two data fields.



Thus, the tree is set up in the following manner:

  1. The node must always have a first field (data 1 in the diagram), but not necessarily a second field (data 2). If both fields are present in a node, the first (or left) field of the node is always less than the second (or right) field of the node.
  2. Each node can have up to three child nodes, but if there is only one data field in the node, the node cannot have more than two children.
  3. The child nodes are set so that data in the first sub-tree is less than the first data field, the data of the second sub-tree is greater than the first data field and less than the second data field, and the data of the third sub-tree is greater than the second data field. If there is only one data field in the node, use the first and second children only.
  4. All leaf nodes appear on the last level.
Now, take a look at the implementation of a 2-3 tree:

class twoThreeNode;          // Forward declaration, since the tree stores a pointer to a node

class twoThreeTree {
  public:
    twoThreeTree(void);      // Constructor
    ~twoThreeTree(void);     // Destructor

    void add(int item);      // Adds an item
    void search(int item);   // Searches for an item

  private:
    twoThreeNode *root;      // Pointer to root node

    // Private helper functions go here
};

class twoThreeNode {
  public:
    int firstData, secondData; // Two data fields

    // The three child nodes
    twoThreeNode *first, *second, *third;

    // This next one is quite useful. It aids
    // moving around the tree. It is a pointer
    // to the parent of the current node.
    twoThreeNode *parent;
};
You can see that this tree, unlike the binary search tree or the heap tree, has no remove() function. It is possible to program a remove function, but it generally isn't worth the time or the effort.

This tree will be a bit harder to implement than the binary search tree just because of the complexity of the node. Still, the add() function algorithm isn't that difficult, and a step-by-step progression through the algorithm helps enormously.

First, to add an item, find the place in the tree for it. This is very much the same as in the binary search tree: just compare and move down the correct link until a leaf node is reached.

Once at the leaf node, there are three basic cases for the add() function:

Case 1: The leaf node has only one data item. This case is very simple. Just insert the new item after this item or shift the old item forward and insert the new item before the old one, depending on the situation.

Case 2: The leaf node is full and the parent node has only one data item. This case is also quite simple. Compare the three values: the two leaf node items and the new item. Choose the middle one and insert it in the parent node where appropriate.

If the leaf node is the left child, shift the old data in the parent node to the right and insert the middle value in the parent node before the old data. Make sure to shift the pointers to the children in the parent node! If the leaf is the right child, just insert the middle value to the right of the old value in the parent node. The two left-over values from the comparison become two leaf nodes, one as the second child, and one as the first or third child depending on the situation.

Case 3: Both the leaf node and the parent node are full. This situation is the most complex of all. In the same manner as Case 2, promote the middle value of the leaf node. Then, continue doing the same thing with the parent node, and so on until the situation is resolved or the root node is reached. In this case, the middle value is promoted once again, except a new root node is created from the middle value and the two other values become the left and right subtrees of the new root. At the leaf node at which we began, the new level allows us to split the three initial values into a three node sub-tree of the parent node, and so on.
Doing a few small 2-3 trees by hand helps to understand this algorithm. Remember to check and hold to the rules governing the 2-3 tree! It can get ugly if they are ignored, especially with the last one.

Using some recursive helper functions as well as that miraculous parent variable will help ease the pain of programming this tree.
