
Construct Complete Binary Tree from its Linked List Representation

Given the linked list representation of a complete binary tree, construct the binary tree. A complete binary tree can be represented in an array in the following way:
if the root node is stored at index i, its left and right children are stored at indices 2*i+1 and 2*i+2 respectively.
Suppose the tree is represented by a linked list in the same way. How do we convert this into the normal linked representation of a binary tree, where every node has data, left and right pointers? In the linked list representation, we cannot directly access the children of the current node unless we traverse the list.
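A minimal C sketch of the usual queue-based construction, assuming hypothetical ListNode and TreeNode types and a fixed-size auxiliary queue: the list is consumed in order, and each dequeued tree node receives the next one or two list values as its left and right children.

#include <stdlib.h>

/* Hypothetical node types for this sketch. */
struct ListNode { int data; struct ListNode *next; };
struct TreeNode { int data; struct TreeNode *left, *right; };

static struct TreeNode *newTreeNode(int data) {
    struct TreeNode *n = malloc(sizeof *n);
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

/* Convert the linked-list representation of a complete binary tree
   into a linked tree. The queue holds tree nodes whose children are
   still to be attached; list nodes are consumed two at a time. */
struct TreeNode *convertList2Binary(struct ListNode *head) {
    if (head == NULL) return NULL;

    struct TreeNode *queue[1000];       /* simple array-backed queue; size must cover the node count */
    int front = 0, back = 0;

    struct TreeNode *root = newTreeNode(head->data);
    queue[back++] = root;
    head = head->next;

    while (head != NULL) {
        struct TreeNode *parent = queue[front++];

        parent->left = newTreeNode(head->data);     /* next list value becomes the left child */
        queue[back++] = parent->left;
        head = head->next;

        if (head != NULL) {
            parent->right = newTreeNode(head->data);/* the one after it becomes the right child */
            queue[back++] = parent->right;
            head = head->next;
        }
    }
    return root;
}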

Basic concepts and nomenclature

Each record of a linked list is often called an 'element' or 'node'.

The field of each node that contains the address of the next node is usually called the 'next link' or 'next pointer'. The remaining fields are known as the 'data', 'information', 'value', 'cargo', or 'payload' fields.

The 'head' of a list is its first node. The 'tail' of a list may refer either to the rest of the list after the head, or to the last node in the list. In Lisp and some derived languages, the next node may be called the 'cdr' (pronounced 'could-er') of the list, while the payload of the head node may be called the 'car'.

Singly linked list

Singly linked lists contain nodes which have a data field as well as a 'next' field, which points to the next node in the line of nodes. Operations that can be performed on singly linked lists include insertion, deletion and traversal.

A singly linked list whose nodes contain two fields: an integer value and a link to the next node

The following code demonstrates how to add a new node with data "value" to the end of a singly linked list:

typedef struct listNode {
    int data;                           // payload of the node
    struct listNode *next;              // link to the next node
} *node;                                // 'node' is a pointer to a list element

node createNode(void);                  // assume createNode creates a new node with data = 0 and next pointing to NULL

node addNode(node head, int value) {
    node temp, p;                       // declare two nodes, temp and p
    temp = createNode();                // create the new node
    temp->data = value;                 // add the element's value to the data part of the node
    if (head == NULL) {
        head = temp;                    // when the linked list is empty, the new node becomes the head
    } else {
        p = head;                       // assign head to p
        while (p->next != NULL) {
            p = p->next;                // traverse the list until p is the last node; the last node always points to NULL
        }
        p->next = temp;                 // point the previous last node to the new node created
    }
    return head;
}
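For illustration, a short usage sketch under the same assumptions (createNode() allocates a node with data = 0 and next = NULL; the names are illustrative):

#include <stdio.h>
#include <stdlib.h>

node createNode(void) {                 // assumed helper: allocate a zeroed node
    node n = malloc(sizeof *n);
    n->data = 0;
    n->next = NULL;
    return n;
}

int main(void) {
    node head = NULL;
    for (int i = 1; i <= 5; i++)
        head = addNode(head, i);        // append 1..5 to the end of the list
    for (node p = head; p != NULL; p = p->next)
        printf("%d ", p->data);         // prints: 1 2 3 4 5
    printf("\n");
    return 0;
}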

Doubly linked list

In a 'doubly linked list', each node contains, besides the next-node link, a second link field pointing to the 'previous' node in the sequence. The two links may be called 'forward' and 'backward', or 'next' and 'prev' ('previous').

A doubly linked list whose nodes contain three fields: an integer value, the link forward to the next node, and the link backward to the previous node
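A minimal C sketch of such a node and of insertion after a given node (the DNode and insertAfter names are illustrative): with both links available, the operation only touches the new node and its two neighbours, so it runs in constant time.

#include <stdlib.h>

struct DNode {                          // hypothetical doubly linked node
    int data;
    struct DNode *prev;                 // link backward to the previous node
    struct DNode *next;                 // link forward to the next node
};

/* Insert a new node holding 'value' immediately after 'pos'. */
struct DNode *insertAfter(struct DNode *pos, int value) {
    struct DNode *n = malloc(sizeof *n);
    n->data = value;
    n->prev = pos;
    n->next = pos->next;
    if (pos->next != NULL)
        pos->next->prev = n;            // back-link from the old successor
    pos->next = n;
    return n;
}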

A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be available in some high-level languages.
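A hedged sketch of the idea in C (the XNode name and traversal routine are illustrative, and the pointer/integer conversions rely on implementation-defined behaviour): each node stores the XOR of the previous and next addresses in a single field, and traversal recovers each next address from the previous node's address.

#include <stdint.h>
#include <stdio.h>

struct XNode {
    int data;
    uintptr_t link;                     // address(prev) XOR address(next)
};

/* Walk the list starting at 'head', whose conceptual predecessor is NULL. */
void traverse(struct XNode *head) {
    struct XNode *prev = NULL, *curr = head;
    while (curr != NULL) {
        printf("%d ", curr->data);
        struct XNode *next =
            (struct XNode *)(curr->link ^ (uintptr_t)prev);  // recover the next address
        prev = curr;
        curr = next;
    }
    printf("\n");
}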

Many modern operating systems use doubly linked lists to maintain references to active processes, threads, and other dynamic objects.[2] A common strategy for rootkits to evade detection is to unlink themselves from these lists.[3]

Multiply linked list

In a 'multiply linked list', each node contains two or more link fields, each field being used to connect the same set of data records in a different order (e.g., by name, by department, by date of birth, etc.). While doubly linked lists can be seen as special cases of multiply linked lists, the fact that the two orders are opposite to each other leads to simpler and more efficient algorithms, so they are usually treated as a separate case.

Circular linked list

In the last node of a list, the link field often contains a null reference, a special value used to indicate the lack of further nodes. A less common convention is to make it point to the first node of the list; in that case, the list is said to be 'circular' or 'circularly linked'; otherwise, it is said to be 'open' or 'linear'. In other words, a circular list is one in which the last node's link points back to the first node.

In the case of a circular doubly linked list, the first node also points to the last node of the list.

Sentinel nodes

In some implementations an extra 'sentinel' or 'dummy' node may be added before the first data record or after the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last" node.

Empty lists

An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.

Hash linking

The link fields need not be physically part of the nodes. If the data records are stored in an array and referenced by their indices, the link field may be stored in a separate array with the same indices as the data records.
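A small C sketch of this layout (the array names and values are illustrative): the records live in a data array and the links in a parallel array of indices, with -1 playing the role of the null reference.

#include <stdio.h>

#define N 5

int main(void) {
    int data[N] = { 10, 20, 30, 40, 50 };   // the data records, stored by index
    int next[N] = {  2, -1,  4,  1,  3 };   // parallel link array: next[i] is the index
                                            // of the record after data[i]; -1 marks the end
    int head = 0;                           // the list starts at index 0

    for (int i = head; i != -1; i = next[i])
        printf("%d ", data[i]);             // prints: 10 30 50 40 20
    printf("\n");
    return 0;
}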

List handles

Since a reference to the first node gives access to the whole list, that reference is often called the 'address', 'pointer', or 'handle' of the list. Algorithms that manipulate linked lists usually get such handles to the input lists and return the handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its first and last nodes.

Combining alternatives

The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly linked lists without sentinels, circular singly linked lists with sentinels, etc.

Tradeoffs

As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures.

Linked lists vs. dynamic arrays

A dynamic array is a data structure that allocates all elements contiguously in memory, and keeps a count of the current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly) copied, which is an expensive operation.

Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a list, assuming that we have already located a pointer to the appropriate node (the one before the node to be removed, or before the insertion point), is a constant-time operation (otherwise, without this reference, it is O(n)), whereas insertion in a dynamic array at a random location will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.

Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate—an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may also have to be resized in order to avoid wasting too much space.

On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access, while linked lists allow only sequential access to elements. Singly linked lists, in fact, can be easily traversed in only one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many machines, because they have optimal locality of reference and thus make good use of data caching.

Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values, because the storage overhead for the links may exceed by a factor of two or more the size of the data. In contrast, a dynamic array requires only the space for the data itself (and a very small amount of control data).[note 1] It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.

Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record.

A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is implementing a program that solves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, one may count around the circle n times. Once the nth person is reached, one should remove them from the circle and have the members close the circle. The process is repeated until only one person is left. That person wins the election. This shows the strengths and weaknesses of a linked list vs. a dynamic array: if the people are viewed as connected nodes in a circular linked list, then it is easy for the linked list to delete nodes (as it only has to rearrange the links between nodes). However, the linked list will be poor at finding the next person to remove and will need to search through the list until it finds that person. A dynamic array, on the other hand, will be poor at deleting nodes (or elements) as it cannot remove one node without individually shifting all the elements up the list by one. However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in the array.
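A hedged C sketch of the linked-list side of this comparison (the names and the counting convention are illustrative): each person is a node in a circular singly linked list, counting advances n-1 links, and removing the nth person is a single relink; the counting itself is the O(n) part.

#include <stdio.h>
#include <stdlib.h>

struct Person { int id; struct Person *next; };

/* Return the id of the last person standing among 'count' people,
   removing every nth person around the circle. */
int josephus(int count, int n) {
    /* build the circle 1..count */
    struct Person *first = malloc(sizeof *first), *p = first;
    first->id = 1;
    for (int i = 2; i <= count; i++) {
        p->next = malloc(sizeof *p->next);
        p = p->next;
        p->id = i;
    }
    p->next = first;                    // close the circle; p is the node before the start

    while (p->next != p) {
        for (int i = 1; i < n; i++)     // count n-1 steps around the circle
            p = p->next;
        struct Person *victim = p->next;
        p->next = victim->next;         // unlink the nth person in O(1)
        free(victim);
    }
    int survivor = p->id;
    free(p);
    return survivor;
}

int main(void) {
    printf("%d\n", josephus(7, 3));     // survivor for 7 people, counting by 3
    return 0;
}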

The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research.

A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees to automatically maintain themselves in a balanced state: AVL trees or red–black trees.

Singly linked linear lists vs. other lists

While doubly linked and circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations.

A singly linked linear list is a recursive data structure, because it contains a pointer to a smaller object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. While those recursive solutions can be adapted for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases.

Linear singly linked lists also allow tail-sharing, the use of a common final portion of sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one—a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists.

In particular, end-sentinel nodes can be shared among singly linked non-circular lists. The same end-sentinel node may be used for every such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by nil or (), whose CAR and CDR links point to itself. Thus a Lisp procedure can safely take the CAR or CDR of any list.

The advantages of the fancy variants are often limited to the complexity of the algorithms, not to their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost.

Doubly linked vs. singly linked

Doubly linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow fast and easy sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the pointer to that node, which is either the handle for the whole list (in the case of the first node) or the link field in the previous node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures.

Circularly linked vs. linearly linked

A circularly linked list may be a natural option to represent arrays that are naturally circular, e.g. the corners of a polygon, a pool of buffers that are used and released in FIFO ("first in, first out") order, or a set of processes that should be time-shared in round-robin order. In these applications, a pointer to any node serves as a handle to the whole list.

With a circular list, a pointer to the last node gives easy access also to the first node, by following one link. Thus, in applications that require access to both ends of the list (e.g., in the implementation of a queue), a circular structure allows one to handle the structure by a single pointer, instead of two.

A circular list can be split into two circular lists, in constant time, by giving the addresses of the last node of each piece. The operation consists in swapping the contents of the link fields of those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two lists into one. This property greatly simplifies some algorithms and data structures, such as the quad-edge and face-edge.
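A hedged C sketch of that operation (the CNode name is illustrative): given one node from each piece, or one node from each of two distinct circular lists, exchanging their link fields performs the split or the join.

struct CNode { int data; struct CNode *next; };   // hypothetical circular-list node

/* Swap the link fields of a and b.  If a and b lie on the same circular
   list, this splits it into two circular lists; if they lie on two
   distinct circular lists, it joins them into one. */
void spliceCircular(struct CNode *a, struct CNode *b) {
    struct CNode *tmp = a->next;
    a->next = b->next;
    b->next = tmp;
}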

The simplest representation for an empty circular list (when such a thing makes sense) is a null pointer, indicating that the list has no nodes. Without this choice, many algorithms have to test for this special case, and handle it separately. By contrast, the use of null to denote an empty linear list is more natural and often creates fewer special cases.

For some applications, it can be useful to use singly linked lists that can vary between being circular and being linear, or even circular with a linear initial segment. Algorithms for searching or otherwise operating on these have to take precautions to avoid accidentally entering an endless loop. One usual method is to have a second pointer walking the list at half or double the speed; if both pointers meet at the same node, a cycle has been found.
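A minimal C sketch of this two-pointer check, sometimes called Floyd's cycle detection (the SNode name is illustrative):

#include <stdbool.h>
#include <stddef.h>

struct SNode { int data; struct SNode *next; };   // hypothetical singly linked node

/* Return true if the list starting at head eventually enters a cycle. */
bool hasCycle(struct SNode *head) {
    struct SNode *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;              // advances one node per step
        fast = fast->next->next;        // advances two nodes per step
        if (slow == fast)
            return true;                // the pointers met: there is a cycle
    }
    return false;                       // fast reached the end: the list is linear
}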

Using sentinel nodes

Sentinel nodes may simplify certain list operations, by ensuring that the next or previous nodes exist for every element, and that even empty lists have at least one node. One may also use a sentinel node at the end of the list, with an appropriate data field, to eliminate some end-of-list tests. For example, when scanning the list looking for a node with a given value x, setting the sentinel's data field to x makes it unnecessary to test for end-of-list inside the loop. Another example is merging two sorted lists: if their sentinels have data fields set to +∞, the choice of the next output node does not need special handling for empty lists.
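A hedged C sketch of the sentinel-search trick just described (the names are illustrative; the sentinel is assumed to be the last node of the list): the sought value is written into the sentinel, so the scan needs no end-of-list test, and afterwards one checks whether a real node or only the sentinel was hit.

#include <stdbool.h>
#include <stddef.h>

struct SNode { int data; struct SNode *next; };

/* 'sentinel' is the extra node at the end of the list; its next is NULL. */
bool contains(struct SNode *head, struct SNode *sentinel, int x) {
    sentinel->data = x;                 // guarantee the scan will stop
    struct SNode *p = head;
    while (p->data != x)                // no end-of-list test needed inside the loop
        p = p->next;
    return p != sentinel;               // found a real node, or only the sentinel?
}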

However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations (such as the creation of a new empty list).

However, if the circular list is used merely to simulate a linear list, one may avoid some of this complexity by adding a single sentinel node to every list, between the last and the first data nodes. With this convention, an empty list consists of the sentinel node alone, pointing to itself via the next-node link. The list handle should then be a pointer to the last data node, before the sentinel, if the list is not empty; or to the sentinel itself, if the list is empty.

The same trick can be used to simplify the handling of a doubly linked linear list, by turning it into a circular doubly linked list with a single sentinel node. However, in this case, the handle should be a single pointer to the dummy node itself.[8]

Definitions

Recursive definition

To actually define a binary tree in general, we must allow for the possibility that only one of the children may be empty. An artifact, which in some textbooks is called an extended binary tree, is needed for that purpose. An extended binary tree is thus recursively defined as:[11]

  • the empty set is an extended binary tree
  • if T1 and T2 are extended binary trees, then denote by T1 • T2 the extended binary tree obtained by adding a root r connected to the left to T1 and to the right to T2, by adding edges when these sub-trees are non-empty.

Another way of imagining this construction (and understanding the terminology) is to consider instead of the empty set a different type of node—for instance square nodes if the regular ones are circles.[12]

Using graph theory concepts

A binary tree is a rooted tree that is also an ordered tree (a.k.a. plane tree) in which every node has at most two children. A rooted tree naturally imparts a notion of levels (distance from the root), thus for every node a notion of children may be defined as the nodes connected to it a level below. Ordering of these children (e.g., by drawing them on a plane) makes it possible to distinguish a left child from a right child.[13] But this still doesn't distinguish a node with a left but not a right child from one with a right but no left child.

The necessary distinction can be made by first partitioning the edges, i.e., defining the binary tree as a triple (V, E1, E2), where (V, E1 ∪ E2) is a rooted tree (equivalently an arborescence) and E1 ∩ E2 is empty, and also requiring that for all j ∈ { 1, 2 } every node has at most one Ej child.[14] A more informal way of making the distinction is to say, quoting the Encyclopedia of Mathematics, that "every node has a left child, a right child, neither, or both" and to specify that these "are all different" binary trees.[7]

Types of binary trees

Tree terminology is not well-standardized and so varies in the literature.

  • A rooted binary tree has a root node and every node has at most two children.
  • A full binary tree (sometimes referred to as a proper[15] or plane binary tree)[16][17] is a tree in which every node has either 0 or 2 children. Another way of defining a full binary tree is a recursive definition. A full binary tree is either:[11]
    • A single vertex.
    • A tree whose root node has two subtrees, both of which are full binary trees.
  • In a complete binary tree every level, except possibly the last, is completely filled, and all nodes in the last level are as far left as possible. It can have between 1 and 2^h nodes at the last level h.[18] An alternative definition is a perfect tree whose rightmost leaves (perhaps all) have been removed. Some authors use the term complete to refer instead to a perfect binary tree as defined below, in which case they call this type of tree (with a possibly not filled last level) an almost complete binary tree or nearly complete binary tree.[19][20] A complete binary tree can be efficiently represented using an array.[18]

A complete binary tree (that is not full)

  • A perfect binary tree is a binary tree in which all interior nodes have two children and all leaves have the same depth or same level.[21] An example of a perfect binary tree is the (non-incestuous) ancestry chart of a person to a given depth, as each person has exactly two biological parents (one mother and one father). Provided the ancestry chart always displays the mother and the father on the same side for a given node, their sex can be seen as an analogy of left and right children, children being understood here as an algorithmic term. A perfect tree is therefore always complete but a complete tree is not necessarily perfect.
  • In the infinite complete binary tree, every node has two children (and so the set of levels is countably infinite). The set of all nodes is countably infinite, but the set of all infinite paths from the root is uncountable, having the cardinality of the continuum. That's because these paths correspond by an order-preserving bijection to the points of the Cantor set, or (using the example of a Stern–Brocot tree) to the set of positive irrational numbers.
  • A balanced binary tree is a binary tree structure in which the left and right subtrees of every node differ in height by no more than 1.[22] One may also consider binary trees where no leaf is much farther away from the root than any other leaf. (Different balancing schemes allow different definitions of "much farther".[23])
  • A degenerate (or pathological) tree is where each parent node has only one associated child node.[24] This means that the tree will behave like a linked list data structure.

Q. Program to implement a Binary Tree using a linked list

Explanation

In this program, we need to create a binary tree by inserting nodes and displaying the nodes in inorder fashion. A typical binary tree is structured as described below.

In a binary tree, each node can have at most two children: zero, one, or two. Each node in the binary tree contains the following information:

Data, which represents the value stored in the node.

Left, which represents the pointer to the left child.

Right, which represents the pointer to the right child.

Algorithm

  1. Define a Node class which has three attributes, namely: data, left and right. Here, left represents the left child of the node and right represents the right child of the node.
  2. When a node is created, data is passed to the data attribute of the node and both left and right are set to null.
  3. Define another class which has an attribute root.
    1. Root represents the root node of the tree; initialize it to null.
  4. insert() will add a new node to the tree:
    1. It checks whether the root is null, which means the tree is empty. If so, it adds the new node as the root.
    2. Else, it adds the root to a queue.
    3. The variable node represents the current node dequeued from the queue.
    4. If the current node already has both a left and a right child, it adds both children to the queue and moves on to the next node.
    5. If the left child is not present, it adds the new node as the left child.
    6. If the left child is present but the right child is not, it adds the new node as the right child.
  5. inorder() will display the nodes of the tree in inorder fashion.
    1. It recursively traverses the tree, printing the left subtree, then the root, then the right subtree.

Solution
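A hedged C sketch of the algorithm described above (the names newNode, insert and inorder, and the fixed-size queue, are illustrative): insert() attaches each new node at the first vacant child slot found in level order, and inorder() prints the left subtree, the node, then the right subtree, which should reproduce the output shown below.

#include <stdio.h>
#include <stdlib.h>

struct Node { int data; struct Node *left, *right; };

static struct Node *newNode(int data) {
    struct Node *n = malloc(sizeof *n);
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

/* Level-order insertion: attach the new node at the first vacant child slot. */
struct Node *insert(struct Node *root, int data) {
    if (root == NULL)
        return newNode(data);           // empty tree: the new node becomes the root

    struct Node *queue[100];            // simple array-backed queue (size is illustrative)
    int front = 0, back = 0;
    queue[back++] = root;

    while (front < back) {
        struct Node *node = queue[front++];
        if (node->left == NULL) {
            node->left = newNode(data); // first missing left child found
            break;
        }
        queue[back++] = node->left;
        if (node->right == NULL) {
            node->right = newNode(data);// left present, right missing
            break;
        }
        queue[back++] = node->right;
    }
    return root;
}

/* Inorder traversal: left subtree, node, right subtree. */
void inorder(struct Node *root) {
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}

int main(void) {
    struct Node *root = NULL;
    for (int i = 1; i <= 7; i++) {
        root = insert(root, i);
        if (i == 1 || i == 3 || i == 5 || i == 7) {  // print after 1, 3, 5 and 7 insertions
            printf("Binary tree after insertion\n");
            inorder(root);
            printf("\n");
        }
    }
    return 0;
}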

Output (identical for the Python, C, Java, C# and PHP versions):

Binary tree after insertion
1
Binary tree after insertion
2 1 3
Binary tree after insertion
4 2 5 1 3
Binary tree after insertion
4 2 5 1 6 3 7
