We are a technically active bunch of students who code, design, create solutions and use technology to innovate and make life easier.

Sunday, 12 March 2017




Bluetooth 5

Bluetooth 5, like every other Bluetooth version, is managed by the Bluetooth SIG (Bluetooth Special Interest Group), which has around 30,000 member companies across various fields.

According to the Bluetooth SIG, the change is aimed at simplifying its marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market.

Bluetooth 5 was officially unveiled at a media event in London on 16 June 2016.

Bluetooth 5 is designed with the Internet of Things in mind, making the world easier to connect and more user friendly. It also supports transfers at 2 megabits per second, instead of the usual 1 megabit per second.


It will provide an 800% increase in broadcast messaging capacity, as well as "coexistence" with other connectivity technologies like Wi-Fi and 4G/LTE for "more robust connections".

Bluetooth 5 also boosts location-based services: a device can convey much more information to other compatible devices without forming an actual connection, and the update includes enhancements that lower battery consumption.

Bluetooth 5 devices are expected to be available in the market by mid-2017.

Let us know if you find this short update useful 😊.



Written by
Prince Hridayalankar


Check out one more article by him - Super Wifi

Monday, 31 October 2016



Working in a team of people who are assigned to, and responsible for, the same kind of role can lead to many conflicts. Take the example of an enthusiastic team of coders: they all contribute to the same project by adding new features, and thus edit the same project files. Even for a single person working on a big project, things get complicated very fast, and it becomes very difficult to keep track of all the changes made so far; for a large team it is simply not feasible. This is where "Version Control Software (VCS)" comes into play and saves the day.

Today we will dig into the open-source version control software Git, getting a little insight into its design as well as its importance. Finally, we will cover some basic and frequently used Git commands.


What is a Version Control system?

But before jumping directly into Git, I would like to talk a little bit about "version control" itself. Version control is a system which keeps track of all the changes made to a set of files over a period of time. It allows you to revert a set of files, or even the entire project, back to a known safe state, compare changes over time, see what was modified last, trace bugs back to the change that introduced them, and more. A VCS removes the human overhead of tracking files, since it is done automatically. Thus, with a VCS, we can modify, experiment and implement new features in a project without fear of losing the integrity of the working version.

A VCS can be characterized as either centralized or distributed. In a Centralized Version Control System (CVCS), there is a main server that contains the repository of all the versioned files, and every client accesses the files from this central place. However, this setup has a serious disadvantage: since every node is connected to the same main server, a failure of that server halts the whole system, making it impossible for anyone to collaborate or even save their versioned changes.

To avoid the problems of a CVCS, the Distributed Version Control System (DVCS) was introduced. In this setup, each client mirrors (or clones) a full-fledged snapshot of the repository, including its entire history, onto its own local system. Thus every client has a full record of all the changes made in the project, and even if a node fails, the whole system does not halt, because every clone is a complete backup of all the data. Git falls under DVCS, which enables many cool features that we will study later.

Git was created by Linus Torvalds in 2005, together with other developers in its initial stage, for the development of the Linux kernel. Git supports most major operating systems, including OS X, Microsoft Windows and, of course, Linux.


Diving into Git

Let us try to understand Git in detail. Git stores all the tracking data for the files kept in a Git repository. To turn a project into a Git repository, we first initialize one inside the project's working directory and then add the files that need to be tracked. Files in this directory fall into two categories: tracked and untracked. We can move a file between the two using "git add" and "git rm" respectively; Git only watches the tracked files.
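As a quick preview (these commands are covered in detail later; notes.txt is a hypothetical file name):

# Start tracking an untracked file
git add notes.txt
# Stop tracking it while keeping it on disk
# (a plain 'git rm notes.txt' would also delete the file)
git rm --cached notes.txt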

According to Git, a tracked file is in one of three states: unmodified, modified or staged. Initially, all tracked files are in the unmodified state. When we make changes to (i.e., edit) a tracked file, it moves to the modified state, and before committing it we need to add it to the staged state. When we commit the staged files, a snapshot of the contents of the working tree at that moment is created and saved, and Git stores a commit object containing a pointer to that snapshot. Every time you commit, Git takes a picture of all your tracked files at that moment and stores a reference to the snapshot; if a file is unmodified, Git simply links to the identical version already stored in a previous snapshot. These snapshots are what implement version control: Git thinks about its data as a stream of snapshots. Thus we can easily check the differences (i.e., the changes) between any two snapshots, as well as between the currently staged files and any snapshot, which helps in analyzing what changed in a new version. Don't worry, we will dig into exactly how to do this later.


Figure 1. The life cycle of the status of your files. [Reference]


In Git terminology, the staging area is called the 'index'; files in the other two states (i.e., unmodified and modified) are said to be in the 'working tree' (i.e., the files you are currently working on); and all the earlier snapshots created by the commit command form the 'history'.

Figure 2. Git basic walk-through. [Reference]
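A minimal hedged sketch of one trip through this life cycle (hello.txt is a hypothetical file):

# hello.txt starts untracked; staging it adds it to the index
git add hello.txt
# committing records the snapshot; the file is now tracked and unmodified
git commit -m "Add hello.txt"
# editing the file moves it to the modified state
echo "one more line" >> hello.txt
# stage and commit again to record the next snapshot in the history
git add hello.txt
git commit -m "Update hello.txt"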

Now let us understand the concept of 'branching', which is considered an essential feature of any VCS. Suppose you are working on a project and suddenly realize you have to add a new feature (or fix a bug). You cannot afford to risk the current progress of the project by adding experimental code that could easily mess up the logic of the whole project. One thing you could do is create a copy of the project in its current state and make changes to that copy; if the changes are successful, you make it the main (or master) project. However, when many people are working together, adding different new features (or fixing different bugs), multiple copies get generated, and keeping track of them all is simply not feasible. We need a system that does not create multiple copies, where multiple people (or a single user) can take a snapshot of the current state, work on it in a different branch and, later, when they are happy with the changes made, easily merge them into the main (or master) project. This is exactly what branching achieves.

To get an insight into how Git does branching, we have to understand how it stores the stream of snapshots and keeps records of commit objects (i.e., the metadata generated after every commit). This is easiest to follow with the diagram below. Whenever you commit the staged files (i.e., the index), the commit is given a unique identification (id) number generated by the SHA-1 hash function, and the commit object (i.e., the metadata about the commit) carries this id. Every commit object points to its parent node, and so on. The branch currently checked out in the working tree is identified by a pointer named HEAD, and the main branch by a pointer named master, which points to the last commit object on that branch. According to the diagram, if we generate a new commit object, it is added after the last commit object (i.e., ed489), and master and HEAD will then both point to this new commit. If we have another branch (say, maint), we can switch to it by making HEAD reference its position. If we then generate a new commit object on 'maint', we end up with two subtrees: two child nodes sharing a common parent (i.e., a47c3). Thus the project can progress in two different directions.


Figure 3. Internal structure of Git records. [Reference]
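You can inspect these pointers on any repository of your own with a few read-only commands:

# Print the SHA-1 id of the commit HEAD currently points to
git rev-parse HEAD
# List local branches; '*' marks the one HEAD is on
git branch
# Show the history, one line (and one id) per commit
git log --oneline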


Now, after some time, we may want to combine the two subtrees into one. We can do that using the "merge" operation, as shown in the figure below. According to the diagram, we had checked out the 'maint' branch and performed a "merge master" operation. Since master's history already contained everything on maint, Git could simply move the maint pointer forward to master's latest commit; this special case is called a "fast-forward merge" and creates no new commit. When the two histories have diverged, a merge instead creates a new merge commit, which documents the merge in the repository. For an in-detail study of Git's branching and merging algorithms, follow this resource. [Reference]


Figure 4. Fast-forward merge operation. [Reference]




Using Git

Now it’s time to get some hands-on experience with Git. First, you must install Git and set the PATH environment variable for it on your operating system. [Reference]

After installation, the first thing to do is set your user name and email address. These fields are mandatory, as Git uses this information every time you commit, and it is immutably baked into the commit. Any other client can view these fields in the log to identify who made which commit. Thus every commit object has three main pieces of metadata: its id, the author (user name and email address) and the date (the timestamp of when the commit occurred). Now run the following commands in your console:

git config --global user.name "your_user_name" (enter)
git config --global user.email "your_email_id@example.com" (enter)

This will set your user name to "your_user_name" and your email address to "your_email_id@example.com". This much configuration is enough to get started, but for more detailed configuration check out this resource. [Reference]
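You can verify what Git has recorded at any time:

git config --list (enter)

This prints every configuration setting Git will use, including the two values set above.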

Now we shall cover the basic workflow and the basic commands, step by step:

1. Initialize the Git repository: After you have installed Git on your system, go to your working directory in your console. Note that all the commands discussed in the rest of this article are to be run on the console. Now it's time to initialize a Git repository in your working directory, which we do with the following command:

git init (enter)

This will create an empty, hidden Git repository (the .git directory) or reinitialize an existing one. Inside it, entries such as HEAD and the objects subdirectory are created, which help Git track the project's progress. An initial HEAD pointer referencing the master branch is also created.

2. Adding files to the index: After you have initialized the Git repository, it is time to add files to the tracked category. We can do this with the following command:

git add [<file_name>] (enter)

This command can be used multiple times. It updates the index with the current content of the mentioned file from the working directory, thus preparing the staged content for the next commit, which will be appended to the historical snapshots. The git add command takes a path name for either a file or a directory; if it is a directory, the command adds all the files in that directory recursively.
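For example, a small hedged sketch (README.txt and src/ are hypothetical names):

# Stage a single file
git add README.txt
# Stage every file under a directory, recursively
git add src/
# Stage everything in the current directory
git add .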

3. Checking the working tree status: We can check the status of the tracked files using the following command:

git status (enter)

The above command also tells you which branch you are on.

4. Recording changes to the repository: If we now want to store the current contents of the index as a snapshot, and hence create a new commit object, we can do so using the following command:

git commit (enter)

The above command will open your default editor, where you type your commit message. A descriptive message is strongly recommended, as it serves as a quick reference for what the commit was about (by default, Git aborts the commit if the message is left empty). To change the editor, run the following command:

git config --global core.editor "<path/to/editor>" (enter)
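For short messages you can also skip the editor entirely by passing the message on the command line with the -m flag:

git commit -m "your commit message" (enter)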

5. Checking changes between commits, the index and the working tree: We can inspect the changes by running the following commands:

To check the difference between your working directory and the index:

git diff (enter)

To check the difference between a modified file (namely, file_name) and the last commit in your local repository:

git diff HEAD [<file_name>] (enter)

To check the difference between the staged files (i.e., the index) and the last commit:

git diff --cached [<file_name>] (enter)

In the output, a '+' marks a line added and a '-' marks a line removed in your file.

6. Branching: We can create a new branch using the following command:

git branch [<branch_name>] (enter)

The above command will create a new branch named 'branch_name' at the current node. We can now switch from the current branch (say, "master") to this new branch by running the following command:

git checkout [<branch_name>] (enter)

The above command makes the HEAD pointer reference the new branch, thus changing the current branch. After switching back to the branch we want to merge into (say, master, via git checkout master), we can merge the snapshots of the new branch (namely, branch_name) into it by running the following command:

git merge [<branch_name>] (enter)
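Putting step 6 together, here is a minimal hedged session (the branch name 'new_feature' is hypothetical):

# Create a feature branch and switch to it
git branch new_feature
git checkout new_feature
# ... edit files, then stage and commit on the new branch
git add .
git commit -m "Implement new feature"
# Switch back to master and merge the feature in
git checkout master
git merge new_feature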

7. Viewing the commit log: We can do this using the following command:

git log (enter)

The above command is useful for tracing back through all the commits. It prints the metadata of each commit object, which contains the author name, the date and the commit message.
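Two handy options: --oneline condenses each commit to a single line, and --graph draws the branch structure alongside the history:

git log --oneline --graph (enter)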

I have only covered some very basic commands to get you started. There is so much more you can do, and it is practically impossible to cover everything in this article; I recommend checking out this resource to sharpen your skills. [Reference]

In the end, we can say Git is a "revision control" tool for managing the tracked files in your working directory. So far we have assumed that the Git repository lives on our own local system and is therefore accessed by a single user. For a team project, the Git repository can be hosted on a remote server; team members keep their own local copies, which they modify and then merge the changes back to the remote. We can host a Git repository on any server, and there are many online platforms available for hosting, such as GitHub, GitLab, BitBucket, etc.
The above code snippets are successfully tested on Mac OS X and Linux.
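As a hedged sketch of that remote workflow (the repository URL is hypothetical and the sketch is illustrative rather than tested):

# Copy a hosted repository, with its full history, to your local system
git clone https://github.com/example/project.git
cd project
# ... stage and commit changes locally, then publish them
git push origin master
# Fetch and merge changes pushed by teammates
git pull origin master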



Written by
Vertika



Saturday, 29 October 2016


Future of Wifi



Super Wi-Fi, also called TV White Space, refers to the unused TV channels between the active ones in the VHF and UHF spectrum. These are typically referred to as the "buffer" channels. In the past, these buffers were placed between active TV channels to protect against broadcast interference. It has since been researched and proven that this unused spectrum can be used to provide broadband Internet access while operating harmoniously with the surrounding TV channels.

The technical term for it is IEEE 802.11af, also referred to as White-Fi or Super Wi-Fi: a wireless computer networking standard in the 802.11 family that allows wireless local area network (WLAN) operation in the TV white space spectrum, in the VHF and UHF bands between 54 and 790 MHz. The standard was approved in February 2014. Cognitive radio technology is used to transmit on unused TV channels, with the standard taking measures to limit interference to primary users such as analog TV, digital TV and wireless microphones. In 2010, the FCC (the US spectrum allocation authority) made this highly effective yet underutilized spectrum available for unlicensed public use. With the use of a database manager and a white-space radio, these channels can be used to access broadband Internet connectivity.


White-Fi Concept

The basic concept behind White-Fi technology (IEEE 802.11af) is that broadcast television coverage has to be organised so that space is left between the coverage areas of different transmitters using the same channels, so that they do not interfere with one another.

Sufficient space has to be left so that even when tropospheric propagation conditions increase the distances over which signals can be received, interference does not normally occur.

This means that there are significant areas where these channels are unused and this leads to very poor spectrum use efficiency.




Salient Features

The table below gives a summary of the salient features of 802.11af technology.

802.11af Characteristic        Description
Operating frequency range      470 - 710 MHz
Channel bandwidth              6 MHz
Transmission power             20 dBm
Modulation format              BPSK
Antenna gain                   0 dBi



Benefits of IEEE 802.11af, White-Fi

A system such as IEEE 802.11af stands to gain many benefits from using TV white space. While the exact nature of the IEEE 802.11af system has not been fully defined, it is already possible to see many of the benefits of White-Fi technology:

Better Coverage than Wi-Fi

While a traditional Wi-Fi router has a relatively limited range, around 100 meters under perfect conditions, and can be blocked by walls or other environmental barriers, TV White Space technology can cover an expanse of about 10 kilometers in diameter (100 times the distance)! This breakthrough technology was nicknamed "Super Wi-Fi" because of its superior range and its ability to penetrate obstacles such as trees, buildings and rough terrain.

Non-Line-of-Sight (NLOS) Performance

Microwave links require line-of-sight (LOS) between the points being connected. In areas with rugged or forested terrain, the tall towers necessary to provide this line-of-sight connection make microwave an expensive and unfeasible solution. TV White Space technology provides an effective alternative to microwave by utilizing the lower-frequency UHF signals that can penetrate obstacles and cover uneven ground without requiring additional infrastructure.

Figure: NLOS operation. Picture credit: Carlson Wireless Technologies


Written by
Prince Hridayalankar



Wednesday, 26 October 2016


A quick introduction to Industry!

The companies and activities involved in producing goods for sale, or in providing services on a large scale through multiple revenue-generating resources for the betterment of the economy, are collectively known as an industry.

In other words, any large-scale business engaged in productive manufacture, trade or services is called an industry.

Across the globe, industry is segregated into three major sectors:




1. Manufacturing Industry


The manufacturing industry is the branch of manufacture and trade based on fabricating and processing raw materials into finished goods, as per the customer's expectations and specifications, using interdependent components.


2. Service Industry


The service industry is a group of companies or organisations that provide all kinds of services and generate revenue from them, ranging from the selling of finished goods made by the manufacturing industry to services provided directly to people, such as hospitality, transportation, public entertainment, etc.


3. Agricultural Industry




The agricultural industry comprises organisations that produce crops and livestock from the natural resources of the Earth. It is the only industry where production can be carried out without using even a single man-made product.

In this first article, I would like to write about the manufacturing industry, covering several points: what exactly the manufacturing industry is, what its role in the Indian economy is and, lastly, how computer science is incorporated into it.

The manufacturing industry plays a very important role in many places, as a source of high employment in any country. Any factory or organisation in this industry has various departments requiring different employees for different jobs, which increases the employment within the factory and, ultimately, in the country. It accommodates all classes of employees, from workers on daily wages to highly qualified managers, CAs, HR staff and other senior posts across departments. In addition, this sector has a multiplier effect on job creation in the service industry.

In layman's terms, a country's economy is directly related to sales and purchases within and outside the country, and to how much each citizen earns in a year. Both are directly related to employment: the more employment, the greater the economic growth of the individual and thereby of the country. As per recent analysis, the manufacturing industry contributes about 16% of India's GDP, which in itself states how important the manufacturing industry is for the production and employment of any country.

The manufacturing industry works in a hierarchical, structured manner. It has multiple departments, which divide up the different jobs needed for the establishment and growth of the industry.

These departments are as follows:

1. Sampling/Research Department
2. Marketing Department
3. Order management Department
4. Planning Department
  • Material requirement planning
  • Material purchase planning
  • Production planning
  • Machinery/Equipment planning
  • Electricity consumption planning
  • Maintenance planning
5. Purchase Department
6. Production Department
7. Quality Control Department
8. Store Department
9. Documentation Department
10. HR Department 
11. Admin Department
12. Accounts Department
13. Finance Department
14. Electrical Department
15. Maintenance Department

On the basis of this bifurcation of departments, there is another hierarchy that follows for the job which is both departmental and industrial. That hierarchy is as follows:

Industrial Hierarchy

Manufacturing industries flow diagram:

Manufacturing Industry Flow Diagram

As I just explained, there are multiple departments in a manufacturing industry, and each department has to record all the activities carried out within it for smooth and genuine working. As the interlinking between the departments shows, each department has to work as per the flow diagram, i.e., each has to pass its data and information onward for further actions to be performed by the next department. Doing this manually can be an extremely hectic and thankless job. There would also be no proof of whether each department completed its job on schedule and, if it did, whether the task was actually forwarded to the next department. Manually, we cannot make the structured, proper plans needed to get a successful output at an effective cost within the schedule. Under manual management a company can run, but its growth will not reach its utmost, and there can be huge losses because of a few fraudulent people mishandling and misusing things.

This is where computer science comes into action in an industry. Since each department has to maintain its records, information and day-to-day activities, industries need proper computerization at both the departmental and the industrial level to maintain the records and enhance the quality of production at an effective cost, with minimum manpower, in a systematic and structured manner.

The departments listed above also show that a manufacturing industry offers all kinds of job vacancies. These departments change after computerization: several new ones are added, providing more vacancies for computer engineers and other IT professionals.

To conclude this first post: a manufacturing industry cannot grow effectively without computerization. Manufacturing industries need to reduce production costs, produce quality products, minimize manpower, electricity consumption and machinery and equipment maintenance, and maintain day-to-day records of each department for analysing individual performance and thereby the organisation's performance. This can be done most effectively by computerizing the industry. Software that provides fully resourced planning of an industry is called Enterprise Resource Planning (ERP) software.

This article is Part 1 of a series on Industrial Computerization.

I shall talk more about its working in the next few blogs.


Written by
Kritika Kataria





Thursday, 20 October 2016

GFS or HDFS?


Cloud computing, powered by advancements in distributed computing, parallel computing, grid computing and other technologies, is changing the way we access our data, whatever its file format. It encourages us to store data distributed over many computers rather than on local machines and single servers. In this article we focus on the two most popular distributed file systems: GFS, the Google File System, and HDFS, the Hadoop Distributed File System, comparing the aspects that make them similar and different.

Before moving further, we need to know what a file system is in the context of distributed systems. A file system is a subsystem of the operating system that performs file management activities such as the organization, storing, retrieval, naming, sharing and protection of files. It also frees programmers from the headache of space allocation and the layout of secondary storage devices.
In a distributed system, however, its implementation is more complex than a local file system, because the users and storage devices are physically dispersed. In return, it provides remote information sharing, user mobility, worldwide availability and the use of diskless workstations through transparent remote-file access.

With the introduction out of the way, we can get into the technical background. GFS is Google's own implementation, designed to meet the rapidly growing demands of Google's processing needs; it is not open to all, and Google currently uses it for its own apps and workloads. This is not the case for HDFS, developed under Apache: it is heavily inspired by GFS but serves as an open-source alternative that satisfies the needs of many different clients. More broadly, Hadoop is an open-source implementation of the MapReduce framework.

Now what is this MapReduce thing?

Originally developed by Google researchers around 2003 and later adopted in Hadoop, MapReduce is a programming model for processing large data sets with parallel, distributed algorithms. In simple words, a "map" step runs in parallel across the machines of the cluster where the data resides, turning each piece of input into intermediate key-value tuples, and a "reduce" step then combines the tuples that share a key into the final result, so that large data sets can be processed efficiently.
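As a hedged illustration via Hadoop's streaming interface (the jar path and the HDFS directories vary by installation), the classic example from the Hadoop documentation runs ordinary shell programs as the mapper and reducer:

# /bin/cat echoes each input line (map);
# /usr/bin/wc counts the lines, words and bytes of its partition (reduce)
hadoop jar hadoop-streaming.jar \
    -input /user/me/input \
    -output /user/me/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc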

Google File System (GFS)
Hadoop Distributed File System (HDFS)

In terms of file structure, the pictures above show the basic models of GFS and HDFS. In GFS, files are divided into 64 MB chunks, each identified by a 64-bit chunk handle and stored on chunkservers, with three replicas by default. Chunks are further divided into 64 KB blocks, each carrying a 32-bit checksum for data integrity. In HDFS, files are divided into 128 MB blocks, and each block replica is represented as two files on a DataNode (the HDFS counterpart of a chunkserver): one holding the data and one holding the checksum and generation stamp, while the NameNode keeps the metadata for all of these blocks.

A noticeable difference from GFS is that HDFS allows only a single writer per file and only appends: existing data cannot be overwritten, whereas GFS also permits random writes within a file. In other words, GFS follows a multiple-writer, multiple-reader model, while HDFS follows a single-writer, multiple-reader model. Another aspect of HDFS is that, being open-source, it provides libraries and bindings for different file systems, languages and platforms (such as S3, KFS, C++, Python, etc.).
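To see how a client actually works with these blocks, here is a hedged sketch using the standard HDFS shell (the paths are hypothetical):

# Copy a local file into HDFS; it is split into blocks behind the scenes
hdfs dfs -put bigfile.log /user/me/bigfile.log
# List the files in an HDFS directory
hdfs dfs -ls /user/me
# Show the blocks and replica locations that make up the file
hdfs fsck /user/me/bigfile.log -files -blocks -locations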

A real-world HDFS deployment is Yahoo's (2010), with over 60 million files and 63 million blocks on a cluster of about 3,500 nodes, handling about 9.8 PB of total storage. Tech giants like Facebook have also implemented HDFS-based data grids to handle huge amounts of user data.

GFS is optimized for the high availability and speed best suited to Google's data storage needs, while remaining simpler than most distributed systems, whereas HDFS adapts to the needs and data management requirements of many different clients. Further developments in this field have introduced other notable systems worth mentioning, such as NFS (Network File System, by Sun Microsystems), AFS (Andrew File System), Coda (by Carnegie Mellon University) and many more.


For an in-depth study, you can refer to the original GFS and HDFS research papers. [Reference]



Written by
Sameer Satyam

