Harry's Tech Space

Read my experience with different products and technologies.

Category: Programming

Tips for programmers and explanations of various programming principles.

Website Loading: Basics of Authentication and Encryption – Part 2

Website Loading

(Credits: Flickr, patriziasoliani)

In the previous part of the Website Loading series, I explained the application layer and the protocols that operate in it. That article covered the basic grunt work done by HTTP and the DNS system. This article addresses whether the website you load is secure. How will the website authenticate you? Is your password under threat? All these questions come under the domain of encryption and authentication.

Once a communication line is established with the server during website loading, it is necessary to ensure that the line is not tampered with midway. This tunneling is ensured by using encryption.

While establishing communication with the server, the client needs to prove its legitimacy. This proving of legitimacy is the job of authentication. [Read More: Various authentication techniques explained here.]

The security of the system faces two sources of attack, hence two techniques are used. One source is a false individual acting on your behalf (impersonation at the end points). The other is a person on the network snooping on your communication (eavesdropping). Encryption prevents eavesdropping; authentication prevents impersonation. Let's get on with authentication first.

Authentication: identifying the correct user

In the real world, we use names to identify a person. Yet we often hear of cases where a person misuses a name to get his work done. Banks go one step further and use signatures to identify individuals. In the case of computers, there are 4 things used to identify individuals. (Read More: Authentication Techniques for Mortals, for the computer authentication models.)

Computers use unique usernames to identify individuals, just as we use names in real life to identify people. But a username alone is not sufficient, as it can be misused. To help in this respect, passphrases were introduced. The passphrase is set when you first approach an organization to become its client. When you sign up for a service like mail, you are asked to set a password, and to set a reasonably difficult one, because the organization doesn't want your password to be easily guessable. The username combined with the passphrase identifies an individual. If someone wants to impersonate you, he needs to know both your username and your passphrase.

The system merges the username and password together into a single string. This string is then hashed, i.e. one-way encrypted, and sent to the server. The server then matches this encrypted string with the credentials generated at sign-up time to log you in. Due to the one-way encryption (aka hashing), the server doesn't know your password, so it is safe from misuse by the server administrator too.
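The idea can be sketched in a few lines of Python. This is only an illustration: real services add a per-user salt and use a deliberately slow hash such as bcrypt, and names like credential_hash are invented here.

```python
import hashlib

def credential_hash(username: str, password: str) -> str:
    # Merge the two fields into one string, then one-way encrypt (hash) it.
    combined = username + ":" + password
    return hashlib.sha256(combined.encode()).hexdigest()

# Stored at sign-up time; the server never keeps the raw password.
STORED = credential_hash("harry", "s3cret-phrase")

def login(username: str, password: str) -> bool:
    # Matching hashes implies matching credentials.
    return credential_hash(username, password) == STORED
```

Note that the server can only compare hashes; it has no way to recover the password from the stored string.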


The passphrase goes by various names: passwords, PINs, OTPs, etc. Passphrases can even be generated on the fly by RFID cards, fingerprints, or retina prints. In short, it is the passphrase which identifies you uniquely. The passphrase can be something you remember (password, PIN), something sent to you (OTPs), something you have (keygen app, RFID card), or something you are (your fingerprint or retina print). All of these (passwords, PINs, OTPs, fingerprints, keygen codes) eventually get converted into an alphanumeric string, hence the term passphrase. The computer compares this passphrase to identify you.


But as we know, there are gullible users who share their passphrases; to overcome this, another layer of passphrase was added. This system of a username and two passphrases is called two-factor authentication. Often this second-layer passphrase is freshly generated and sent to the user, like an OTP. This second layer works on the "what you have" principle rather than the "what you know" principle used for passwords. A device or app, like a keygen, may also be given to the user to generate these passphrases. OTPs and RNG grids are "what you have" things, as they live on your device and are freshly generated. Even RFID can be used for this purpose, but RF readers are not prevalent.


In the case of biometrics, your fingerprint is used to generate a random alphanumeric string which is matched with the server to identify you. Since biometrics are unique to individuals, a separate username is not required. The encrypted alphanumeric string serves as the credentials on the server, while the biometric itself, be it fingerprint or retina print, becomes a unique username-and-password combo. The process of converting the biometric info to an alphanumeric string is equivalent to encryption, and a further one-way encryption of this string prevents misuse at the server side. To impersonate a biometric, one would need the same fingerprint or retina print, which is practically impossible.

Encryption: Secured Website Loading

In the real world, we use coded language to pass on secrets (often used by 11th and 12th std. boys). All these coded languages are essentially a form of encryption. The role of encryption is to prevent an unauthorized person in the middle from snooping on you. In the case of code words, the words are pre-decided by friends at college, and they use them whenever possible. But the code words are not decipherable by your nosy neighbour, because he does not know their hidden meaning. Encryption does the same: it converts your information into a gibberish string which can be decoded only by the intended recipient.

There are 3 types of encryption: symmetric, asymmetric, and hashing. The coded language in the above example was a form of symmetric encryption.

Symmetric Key Encryption:

In symmetric key encryption, encryption and decryption happen based on an input passphrase. In the case of encrypting hard drives, the encryption algorithm asks for a password to be set while encrypting the drive. Once the drive has been encrypted, the same password is needed to decrypt it. Since the keys used for encryption and decryption are the same, it is called symmetric key encryption.
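As a toy illustration in Python, here is a single-key cipher where the very same passphrase both scrambles and unscrambles the data. (XOR is used only to keep the sketch short; it is not secure. Real symmetric systems use algorithms like AES.)

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR-ing with the same key twice returns the original bytes,
    # so one function serves as both encrypt and decrypt.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"meet at noon", b"passphrase")
plaintext = xor_cipher(ciphertext, b"passphrase")  # same key decrypts
```

Anyone who learns the passphrase can run the same function and read the message, which is exactly the drawback discussed next.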

Even in the case of the coded language in the above example, the code words are established by the friends, so only the friends can decode them. A third party doesn't know the code, hence is unable to decode the meaning. This type of encryption is used to encrypt your mobile's contents.

The main drawback of symmetric key encryption is that compromise of the passphrase compromises you. Once the passphrase is known, anybody can decrypt and view your content. For this reason, symmetric key encryption is not used for securing web communication, but for securing your device. For the web, another technique was created to overcome this problem: public key encryption.

Asymmetric key / Public Key Encryption:

To overcome the problem of the key getting compromised, this dual-key encryption was created. Here the keys required for encryption and decryption are different. The server sends the public key to the client. The client encrypts the content with the public key, then sends the ciphertext (the security-parlance name for encrypted content) to the server. The server uses its private key to decrypt the ciphertext and retrieve the plaintext message. Since 2 keys are used, it is called public key encryption.
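The mathematics behind this can be shown with textbook RSA and deliberately tiny primes (a sketch only; real keys are thousands of bits long). The public pair (n, e) is handed out openly, while the private exponent d stays with the server:

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q   # 3233: the modulus, part of both keys
e = 17      # public exponent, shared with everyone
d = 2753    # private exponent: (e * d) % 780 == 1, where 780 = lcm(p-1, q-1)

def encrypt(message: int) -> int:
    # Anyone holding the public key (n, e) can produce the ciphertext.
    return pow(message, e, n)

def decrypt(ciphertext: int) -> int:
    # Only the holder of the private exponent d can reverse it.
    return pow(ciphertext, d, n)
```

Knowing n and e alone does not let an eavesdropper recover d without factoring n, which is what makes the public key safe to publish.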

With public key encryption one may feel secure, but this method is vulnerable to man-in-the-middle attacks. In this attack, an attacker keeps the public key sent by the server to himself and sends you a fake public key. You send your message to the attacker, thinking him to be the safe server. The attacker now has your credentials and can compromise you. To overcome this problem, a reverse version of the same public key encryption is used.

In the reverse version of public key encryption, the server sends its self-generated public key as well as a certificate containing that public key, issued by a certifying authority. You receive both together. The certifying authority's key, which you already have (it ships with your operating system), is used to verify the certificate. Once the certificate is verified to be genuine, its validity period is matched against your computer's date (a warning is shown if your computer's date is wrong, as the matching fails). Only after this does the website loading work continue.


Hashing:

Many purists don't consider hashing part of encryption. In hashing, a variable-length string is taken and mapped to a fixed-length string. The specialty of this technique is that the mapped string, called a hash, is totally different even if you change just one character. So a thief cannot guess the username-password combo by going through the hash. (PS: some have done so already with weaker hashes.) As referenced earlier in the password section, hashes are stored on the server to authenticate you, and the hash is sent to the server using public key encryption. In the case of online storage services like Dropbox, it is this kind of hash that serves as the key when the contents you store on their service are encrypted with symmetric key encryption.

Here are some myth busters:

  • Entering ATM PIN in reverse alerts the police.
    The card number and PIN are combined and their hash is matched by the ATM server before doing the transaction. If the PIN entered is wrong (or reversed), its hash will be totally different, hence the transaction fails. The server can only respond with "correct match" or "wrong match".
  • Why login failure says ‘either username or password wrong’?
    While logging in, servers take the hash of the username and password combined. Even a one-character defect in either causes the hash to be different. The server doesn't know which of the two was wrong, so to be more user friendly it flags the failure as either username or password.
  • Why does newer card request need new PIN too?
    When you change your PIN at an ATM machine, the machine already knows the card number. When the PIN is changed, the ATM merges the card number and new PIN and stores their hash on the bank server. In the case of a newer card, the bank doesn't know your PIN, hence it replaces the old hash with a new hash of the new card number and a freshly generated PIN.
  • Can bank misuse my PIN and impersonate me?
    No. The PIN mailer sent to you is generated by an RNG (a random number generator). Only the hash of the composite key is stored on the server, hence no machine or human knows your PIN. It is also for this reason that a new PIN is generated when yours is lost, instead of giving you the old PIN.

If this has aroused your curiosity, don't hesitate to take the Coursera course "Internet History, Technology, Security", which digs deeper into this field.

Website Loading: What happens when you type www.google.com – part 1

Website Loading

(Credits: Flickr, patriziasoliani)

Whenever a person types www.google.com in his address bar, lots of work happens behind the scenes to load Google's website. The very act of website loading requires proper functioning of various elements of the technology stack. There is the DNS system helping to connect with the server; lots of lower-level protocols actually transmit the data; and one needs to be mindful of downloading the images and all required assets for proper website loading.

Since the internet was a very complex project, it was split into independent layers to help technologists build its various complex aspects. These layers combined are called the "Internet Protocol Stack". A protocol is just a set of rules which needs to be followed by the software implementing it. The top-layer protocols work independently of the bottom-layer protocols. Each layer is given predefined responsibilities to perform. The various layers of the stack and their responsibilities are listed below.

Layers of Internet Protocol Stack:

  1. Application layer: This is the topmost layer of the internet protocol stack, tasked with interacting with the user. A web browser works in this layer. The Domain Name System (a helper system for name resolution) is also an application layer protocol. Services like web browsing, e-mail, and file sharing are provided by protocols of this layer.
  2. Transport layer: The transport layer provides various services to the application layer via ports. This layer abstracts host-to-host services. (Server and client computers are called hosts.) It provides connection-oriented or connectionless, tunnel-like reliability services by subdividing the data for easy transmission and sequencing it at the end host to be presented to the topmost layer. This layer also ensures traffic congestion doesn't happen between the hosts.
  3. Internet layer: The internet layer provides end-to-end routing services to the transport layer. Each computer/router is identified by a unique IP address to help in routing. Also, to help transmit a packet efficiently, routers share their data with other routers via various routing protocols.
  4. Link layer: The link layer's job is to transmit a packet from one node on a network to another. (Nodes are the various internet devices: routers, switches, computers' network cards, etc.) Here another addressing scheme called Media Access Control (MAC) is used for transmitting data between 2 network nodes. Physical transmission protocols like WiFi and Ethernet operate in this layer. Protocols like ARP, RARP, and NDP are used here to map IP addresses to link addresses. This layer is tasked with the actual transmission of data between 2 IP addresses.

These are the 4 layers of the TCP/IP stack.

Website Loading: Players of Ecosystem

The World Wide Web is built on a protocol called HTTP, which stands for Hyper Text Transfer Protocol. That's the main reason why websites show http:// at the beginning. HTTP is an application layer protocol designed to send HTML (Hyper Text Markup Language) documents, which display a web page. Computers which understand HTTP requests are called servers. The client is the computer which requested the HTML resource by sending an HTTP request. The browser is the program which interprets the HTML document and displays it. The URL (Uniform Resource Locator) is the addressing scheme used to identify a web resource.

When Sir Tim Berners-Lee introduced the web for the first time, he designed all the components of the ecosystem: the browser program, the server program, the HTTP protocol, the HTML markup language, and the URL addressing scheme. Below are some facts about the WWW ecosystem.

  • The first browser was called WorldWideWeb, later renamed Nexus.
  • The first server was called CERN HTTPd (CERN Hyper Text Transfer Protocol Daemon).
  • The first website was info.cern.ch.
  • The first URL was http://info.cern.ch/hypertext/WWW/TheProject.html.

Website Loading: Work done at Application Layer

When you type the site name in the browser's address bar, the browser first establishes a connection with the server. The server address is obtained by querying the DNS. The destination server address obtained via DNS is then embedded in the internet layer's destination address field. The HTTP request is prepared and given to the transport layer in the data field. (Note: HTTP uses a transport layer protocol called TCP, the Transmission Control Protocol, for its communications.)

Website Loading Request:

The HTTP request starts with a request line, which has 3 main sections. The request line looks like this:

<Method> <URL Path> <HTTP Version>

Ex: GET /index.html HTTP/1.1

GET is a request asking the server to give the resource identified at the given path, using the stated HTTP version. Below the request line, other additional parameters are sent. These additional parameters are called header fields. Some header fields are mandatory and others are optional. (Refer to this wiki for details on header fields.) Note that the browser type is also one of the header fields, named user-agent:.
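A request of this shape can be assembled by hand. This is only a sketch; the host and User-Agent values are invented, and real browsers send many more header fields:

```python
def build_request(method: str, path: str, host: str) -> str:
    # Request line, then header fields, then the blank line ending the headers.
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"User-Agent: example-browser/1.0\r\n"
        f"\r\n"
    )

raw = build_request("GET", "/index.html", "www.example.com")
```

The trailing blank line is what tells the server the headers have ended; a POST request would carry a body after it.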

Server Response:

Once the query is made to the server, the server searches its resource pool and gives a response. Like the request, the HTTP response also starts with a status line. Below the status line the usual headers follow. Note that the server also identifies itself, in a header field called server:. After a blank line the response body begins, containing the HTML code.
Like the request, the HTTP response has 3 main sections in its status line. The status line looks like this:

<HTTP Version> <Status Code> <Response Phrase>

Ex: HTTP/1.1 200 OK

The headers follow the status line, followed by the body containing the requested resource. The status codes are subdivided into various series. (Refer to this wiki article for a list of all status codes.) Remember that 400-series statuses are because the client, i.e. the browser, made a mistake. 500-series errors are because of server problems. The 300 series requires the client browser to take additional actions. The famous 404 error means the client requested a resource which doesn't exist, hence it is a client-side mistake. Error 500, which bloggers like me encounter a lot, means the server has gone kaput due to some sort of misconfiguration, so the mistake is on the server side.
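The series rule can be captured in a small helper (an illustration of the convention, not an exhaustive mapping):

```python
def status_class(code: int) -> str:
    # The leading digit of the status code names its series.
    series = {
        2: "success",
        3: "client must take additional action",
        4: "client-side mistake",
        5: "server-side problem",
    }
    return series.get(code // 100, "other")
```

So status_class(404) reports a client-side mistake, while status_class(500) reports a server-side problem, matching the rules of thumb above.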

DNS resolution:

The work of the DNS is to fetch the IP address of the server; only after this can the browser continue its website loading work. (Note: DNS uses a transport layer protocol called UDP, the User Datagram Protocol, for communications.)

Whenever you browse a website, its IP address (aka A record) is stored by your operating system in the DNS cache for later use. When a website's A record is not available in the OS's DNS cache, a DNS query is automatically sent to your ISP's (Internet Service Provider's) recursive resolver. If the recursive resolver doesn't have the A record (often it does), it keeps you waiting and asks a root nameserver for it. (PS: there are only 13 root nameserver addresses; they have links to all the TLDs.) The root nameserver points the query to the appropriate TLD nameserver. (E.g. a query for www.google.com will get directed to the .com TLD nameserver.) The TLD nameserver directs the query to the authoritative nameserver, which gives the A record. Once the recursive resolver fetches the A record, it keeps a copy with it and sends the record to you.
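The caching behaviour of a recursive resolver can be mimicked in a few lines. The zone data here is a made-up stand-in for the whole root / TLD / authoritative chain:

```python
# Toy recursive resolver: answer from cache when possible, otherwise
# "walk" the nameserver chain (stubbed here by a dictionary) and cache
# the result for next time.
ZONE = {"www.google.com": "142.250.80.46"}  # made-up A record for illustration
cache = {}

def resolve(hostname: str) -> str:
    if hostname in cache:
        return cache[hostname]  # cache hit: no nameserver is bothered
    ip = ZONE[hostname]         # stands in for root -> TLD -> authoritative lookup
    cache[hostname] = ip        # keep a copy, like the recursive resolver does
    return ip
```

The second call for the same hostname is answered straight from the cache, which is why repeat visits to a site skip most of the DNS round trips.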


The above-mentioned steps are done during website loading. The activities of all these protocols happen at the application layer, which sits atop the transport, internet, and link layers, which in turn do a lot more work to keep the internet running. So it is worthwhile to consider the WWW a public web with decent gentlemen doing the background work. If you have noted the header fields, servers do have lots of information to identify a computer; it is because of that that efficient communication happens. If you want to take a cue about privacy from the above explanation of headers, understand that the WWW is public. Only content stored on your computer, or encrypted content, is private.

Writing my “First Android App”

Android Studio, the IDE in which I wrote my first Android app

A few days ago I handed the source code of my first Android app to the tech section of SKDRDP, as my computer broke down. The app I was building was for cash collection tracking, to be used by the field staff of SKDRDP. It was one heck of a journey to build my first Android app. Building it forced me to use many of the core Android features, namely Activities, Services, and Content Providers. On top of that, I was not a "professional coder" but an MBA who codes. Do read on how an app taught me coding.

The events leading to my being picked:

One fine day in the month of May, I was approached by staff of the tech section asking whether I knew Android application development. I replied yes, but that I was not a proficient developer as I had done the course long ago. Later they asked me to help them build their app for cash collection. Technically they wanted me to port the app, but I had to resort to building it from scratch as that was much easier. Then we had tons of discussion about how they had built the app for the Nokia platform, what features they were expecting, etc. I categorize this discussion as the "requirements capture" phase.

Understand the app's requirements (what is it supposed to do?) correctly; that will avert costly reworks down the line.

After all the discussions were over, I asked them how they came to know about my skills in Android, and they told me that they found out using Google. I was a bit astonished, as my name as a search term wouldn't have revealed the Android aspects of me. I later came to my desk and checked various combinations of keywords which would throw up my name in the top search results. After trying for hours, I decided to key in the same term I would search if I were looking for developers: "android developer Dharmasthala". That search leads to me ruling the results. My article on setting up the Android IDE also adds relevancy in the eyes of the search engine. They went from the search result to my blog to read the article How to set up Development Environment for Android? and later decided to contact me.

The Google search "android developer Dharmasthala": the thing that led to my first Android app


After I was on board, they got the permission of SKDRDP's CEO for me to help them build the app, as I report directly to him. Once the permission was obtained, I got going. But the events that culminated in me doing the app rested on "bad events" shaping my past.

Dots of Past:

I did my Android app development course soon after I finished my MBA, somewhere in 2012. At that time the majority of budget phones were running the Gingerbread 2.3.3 version of Android. Later I had to quit one of my jobs, but immediately after quitting I ordered some books that would change my path forever.

The above-said books changed my path in programming forever. Design Patterns affirmed why I need to handle everything via interfaces only. Refactoring helped me change patterns to make code readable. Code Complete is the one which gave me a big bird's-eye view of the software construction landscape. Luckily, Steve Jobs' speech did help in affirming my gut feeling about ordering these books.

Apart from books, there was much learning from unrelated courses done on sites like Coursera, edX, and Udacity. One of the key features of these courses was having our own coding style. If you see the code I have written on GitHub, you can see glimpses of my coding style. Code Complete also focuses on this.

Every programmer is an artist. Like every artist, they have their style etched into their works.

Role of Design Patterns and Code Complete 2 in My First Android App:

Some time after I did my course on Android, I was introduced to the "Design Patterns" book. I was so immersed in this book that I even flunked a Google interview. It's because of this book that I can appreciate why Java doesn't support multiple inheritance for classes but allows it for interfaces, and why there is a clone() method attached to every Java class. It also taught me techniques like always pointing to the interface and serving requests through it.

Many problems in programming are repeating in nature, so chances are another programmer has encountered the same problem in the past, and will in the future. For that reason, Design Patterns was written. The patterns were distilled by looking at various practical problems and how they were solved in the past. The JUnit testing framework also depends on these patterns. By giving names to these patterns, they become part of the programmer's lexicon.

Access a class through its interface only. Don’t break encapsulation.

If work can be accomplished by object composition, do it; don't inherit.

The major role design patterns played was in preparing the interfaces. With stricter interfaces, it was easy to communicate with other parts of the program. The database was abstracted into Content Providers. UI classes were segregated into their own package with DB operations abstracted away. If an activity had to communicate with an adapter, usage of that adapter's interface was enforced instead of direct access. The main advantage of this was code readability.
Class structure of my First Android App

Code like boolean isMemberPresent = memberAdapter.isPresent(memberAtPosition); is readable, isn't it?

The other book that played a pivotal role in bringing the app to fruition was Code Complete, 2nd Edition. It's this book that cemented my style. In the above code example, if memberAtPosition were replaced with i, the code wouldn't be that readable. Code Complete has many more tips like that. It's based on the tips and techniques given in the book that my first Android app could be completed, instead of me throwing up my hands at the first sign of trouble.

Refactoring and My First Android App:

The books Code Complete and Design Patterns provided me with a solid foundation in programming. It was Refactoring that smoothed the rough edges of my skill and made things really manageable. Refactoring means making small, meaningful changes to a program. It's the various refactorings that made the code more readable.

refactoring grab


Earlier, my fragment-swapping logic was embedded deep in a method of a Fragment class. I ran an Extract Method refactoring to separate the fragment swapping into its own method. Once this was done, I moved the method to the activity, as the job of managing fragments is supposed to be in an Activity. Rename was used extensively, as I preferred names which are easy to say and contextual. The reason I kept everything simple was that I wrote code which would be read by the tech section; since the ultimate responsibility of extending the app rested on them, simple code was paramount. I also did some "Replace Inheritance with Delegation" on my TextWatcher and OnClickListener classes as I moved them to my newly formed adapter class. All these refactorings were done with the purpose of keeping encapsulation intact. It's because of these refactorings that the code of my first Android app is "readable".

How GitHub helped in versioning my First Android App:

Initially I disregarded the importance of keeping version control in place. Then I made one code change and it broke the app completely. Luckily I was able to figure out that the code change itself was buggy. I rolled back all the changes and immediately set up GitHub on my PC.


Once version control was set up, I integrated my IDE with it. Because of this I was able to write down, in English, what I had added to the code. The naming convention for version numbers was: 1.x for a major UI change, 1.x.x for a small new feature, and a 4th digit for smaller fixes. Also, Code Complete 2 mentions creating daily builds, hence I committed all the changes I made at the end of each day. With each commit a build was created and tested. Due to the faster commit-and-build cycle, debugging was considerably easier. The commits with D in their names were builds I had set up for debugging purposes; that code was littered with Log.d("xxxx", "yyyyy") messages which kept a pulse on everything happening in my program.

The main advantage of version control was that I could concentrate only on what had changed and fix the build if it was broken. This concentrated scope aided a lot in debugging. With each commit synced to GitHub at the end of the day, I was guarded against my PC breaking down, which it did when I finished the app.

This was my journey of writing my first Android app. Do share your views and your first Android app experience in the comments.

Authentication techniques for “mortals”

Facebook authentication

Authentication screen of facebook

The most common thing every designer has to deal with is 'authentication'. In simple words: ensuring the person logging into Narendra Modi's account is Narendra Modi himself, not Rowdy Ranganna. In the real world you see his face and authenticate (not thinking about 'humshakals' and impostors 😉). But the world of computers, which is far more powerful than our real world, gives you a set of choices. There are 4 authentication techniques for users. They are

  1. What you know? (E.g.: Passwords and PIN)
  2. What you have? (E.g.: Key-cards, RFID cards, OTP’s, Passes)
  3. Who you are? (E.g.: Fingerprint scan, Face Recognition, and other Bio-metrics)
  4. Where you are? (E.g.: Location tracking, I.P. tracking)

1. Authentication based on ‘What you Know?’

In 'what you know' based authentication, the 2 parties decide on a secret phrase to identify each other at the beginning. While logging in or doing a transaction, this secret phrase is asked for, and then matched to authenticate. (PS: on all websites it is the encrypted code of the password that is matched; the actual password is immediately encrypted.)

If you share this secret phrase with your friend, then your friend can use the service appearing as you. If your friend becomes greedy and misuses the service, it will be you who is first to get caught. Based on recent events, don't worry about getting prosecuted, because you are the 'donkey' in the eyes of the law, not the perpetrator of the crime.

2. Authentication based on ‘What you Have?’

In 'what you have' based authentication, the 2 parties decide on a thing to identify each other. All the banks in India send a thing called an OTP to your mobile for authentication. Theatres give you a thing called a movie pass to authenticate you. Companies give you a thing called an RFID card to authenticate you. Software vendors give you a thing called a licence file to authenticate you.

'What you have' authentication is comparatively a bit expensive, but stronger than 'what you know' authentication. This technique is also vulnerable to sharing of the thing, and it requires some physical infrastructure to give you the thing while signing up.

3. Authentication based on ‘Who you are?’

In 'who you are' based authentication, the authentication is based on your physical features. Some examples are fingerprint scanning, retina scanning, and face recognition. In criminal investigations, DNA is used. Since physical features are unique to an individual, a copy is taken by one of the parties during the sign-up phase, and pattern matching is done to authenticate.

'Who you are' authentication is by far the most expensive one, and the strongest too. This technique cannot easily be used on the internet because of the sheer volume of infrastructure required. Being based on unique features of the body, the sharing problem doesn't arise at all.

4. Authentication based on ‘Where you are?’

This is by far the newest entrant in the world of authentication. In 'where you are' based authentication, the location of the person is used to authenticate. Due to the difficulty of ascertaining real-time location data of a person, this technique is often used as an add-on layer of security. One example is the notification by Facebook when you log in from a different location: it asks you to save the browser, and even sends a mail to your mail ID notifying you of the login. Normally the IP address or GPS data is used to ascertain the location.

Tips to follow:

  • Never share your secret (password or PIN) with anybody. It is difficult to track down perpetrators if a crime happens.
  • Sharing your identity is a crime. Don't complain if you get hacked; you are the one who let the thief in.
  • If you can afford a system based on 'who you are' authentication, then use it.
  • Save your passwords in your brain (if there are few enough to remember) or in powerful password managers like LastPass or KeePass.

Do share your ideas in comments section.

Here are some articles I have written on security,

A Rational Mind’s thought on Internet of Things (IoT)

Nowadays there is a lot of brouhaha over the Internet of Things, a special version of the internet where things can communicate. This bug was planted in me by GigaOm. If you ask any so-called expert what the Internet of Things is, they will start giving you a view of the future where traffic lights are controlled over the internet by signals sent from vehicle-density sensors, and a refrigerator orders milk when it's over.

The applications of the Internet of Things are varied, but at its core the Internet of Things is simple. The main issue with the Internet of Things is a common protocol which all machines must understand, a target which is impossible for for-profit companies: in order for the Internet of Things to succeed, a Nest product must work nicely with a GE product and a Philips product, which is impossible. Internet of Things devices are also dual-function: they can act as push devices or receive devices, whereas the majority of protocols work like servers, responding only when some human client asks. Before jumping into the Internet of Things, it’s necessary to know the layers of the TCP/IP stack and the responsibilities of each layer.

The TCP/IP Layers and the Responsibilities of Each Layer

The 4 layers of TCP/IP model are:

  1. Link Layer [Wikipedia]
  2. Internet Layer [Wikipedia]
  3. Transport Layer [Wikipedia]
  4. Application Layer [Wikipedia]

The Link Layer is the bottom-most layer in the TCP/IP suite. It is a combination of the Physical and Data Link Layers of the OSI model. The main responsibility of this layer is to send frames reliably from the host (the current device) to the device at the other end of the line. The device at the other end can be a router, PC, mobile, etc. The line can be a physical line like DSL, Ethernet or ISDN, or a wireless link like Wi-Fi, Bluetooth or GPRS.

The Internet Layer sits above the Link Layer. This layer corresponds to the Network Layer in the OSI model. The main responsibility of this layer is to send packets across the vast network of devices called “The Internet”. If I enter www.google.com in my browser, it is the responsibility of the internet layer to reach Google. IP (Internet Protocol) works at this layer.

The Transport Layer sits above the Internet Layer. The main aim of this layer is to provide end-to-end connection reliability. TCP gives the impression of a fixed path for information flow. This layer also provides flow control, where sender and receiver communicate at mutually acceptable speeds. The protocols of this layer provide ports to allow multiple application-layer protocols on the same host to send data.

The Application Layer is the topmost layer of the TCP/IP as well as the OSI model. It is this layer which interacts with users. There is a huge array of application-layer protocols; BitTorrent, HTTP/HTTPS, FTP and SMTP are some of the famous ones. If one wants to write a protocol for the Internet of Things, it is at this layer that the protocol has to function.

My thoughts on Internet of Things Protocol(IoT Protocol)

The Internet of Things is more or less like the ‘Skynet’ of the Terminator series. Though humans have a lot of independent thinking, which enables them to understand even ambiguous statements, computers are not that intelligent, and all of them need to talk the same language (protocol) to understand each other. With companies promoting proprietary silos like HealthKit, Android Auto and CarPlay, it is impossible for IoT to take off because of the incompatibility between them.

With the grand ambition of bringing almost all forms of machine under this umbrella, certain use cases are already apparent, along with certain behaviors required from the devices receiving and sending commands. The list of expectations from the protocol is:

  • Interoperability: the protocol must be able to send messages to devices not made by the same manufacturer, and also to different classes of device like fridges, TVs, cars, traffic lights, etc.
  • Push/pull operations: a device must be able to push events to other devices, like bell-ringing events to devices having speakers. It should also be able to ask a device to respond to its queries, like a smartphone asking temperature sensors to report the temperature inside the enclosure they are monitoring.
  • Command groups: just as processors have command sets specialised for the work they do, like encryption or media encoding, the protocol is supposed to have command sets for domains like traffic and vehicles, health, home automation, power and electricity, defence and offence, etc.
  • Compatibility with other protocols: the way an operating system, once a command is executed, gives way to the command’s program to do whatever it wants with the user, this protocol must only send the command to the concerned device and then relinquish control. For example, if the user sends a command to the TV to play a video present on the home NAS box, the protocol must establish the connection between the TV and the NAS box and let the video-streaming protocol do the work of video playback.

In this way the protocol must act like an operating system: keeping track of all resources and devices on the network, and letting the other protocols do the user-interfacing functions.

The simplest IoT protocol that comes to my mind is made of 3 parts.

  1. The first part is text like Push/Pull which tells the handling device what is expected of it: whether the message is being pushed to it because an event has fired (like a bell ringing), or pulled, indicating “I need to do something and I need data from you” (like a smartphone asking for temperature data).
  2. The second part is the command identifier. A Java-like package system is suitable here; for example, commands like ‘auto.horn’ or ‘home.bell_ring’ are easier to understand and manage.
  3. The third part is the option list, which is optional. One can adopt a lot of schemes here. Ideal is a bundle of key-value pairs, which allows the data type of the ‘value’ to be anything it wants: Boolean, numeric, character, a data structure or other. There can be 0 to N key-value pairs in the option bundle. For example, options like temptype=’celsius’ for a command asking for temperature, activate=’true’, protocol=’streaming video’ or setGear=’3’ are good. Here setGear is the key and 3 is the value.

This is what I think about the Internet of Things and how its protocol should work. The way computers took the burden of calculation off our backs, IoT will take away the need to control machines all the time. The article is licensed under Creative Commons, so don’t hesitate to build one such open-source protocol as mentioned. I would like to hear your ideas; do share them in the comments.

You are reading an Article by Harsha Ankola, originally posted on Harsha’s Tech Space. If you have enjoyed this post, be sure to follow Harsha on Twitter, Facebook and Google+.

Void Pointers and their Impact on Learning

Void Pointers

The concept of pointers is one of the most powerful and most error-prone concepts in C programming. In C, a pointer is a special type of variable which points to a particular address in memory. A memory address identifies a particular byte in memory: address 0 refers to the 0th byte.

Programming languages provide the concept of a data type. There are types like integers, characters, floating-point numbers, arrays and a lot more. In C, the data types differ in the way the data is stored: a character occupies 1 byte, while an integer typically occupies 2 or 4 bytes depending on the platform. So a pointer to a character type can accurately read the value of character data only. If by some error this character pointer is made to point at a different type of data (for example an integer), it will read the data incorrectly.

The validity of a pointer is based on where it is pointing and the type of data it points to. So while declaring pointers, the referenced type has to be mentioned: a pointer to a character type must be declared as a character pointer, a pointer to an integer as an integer pointer, and so on. This strict scheme of typed pointers is fine as long as programs behave properly with the addresses allocated to them. But there are special cases, such as memory being requested from the OS for the program’s own purposes, where the type is not known up front. Such cases require a generic pointer type: a pointer which can point to anything, meaning a pointer through whose address any data type can be written and read. The concept is akin to purchasing a piece of land without stating a purpose and using it for our own ends. This generic pointer is called the “void pointer”.


The learning process of humans and machines depends on filling up some data in some section of memory, and that data being read, interpreted and processed by the central computer, i.e. the brain. All the data we learn is normally tagged with its higher-level concept. For example, if I learn that the behavior of people is based on the beliefs they hold, I remember this as something pertaining to human behavior and attach a mental tag of “human behavior” to it. This concept of tagging helps us retrieve things from our memory with much more ease. Google also uses a similar concept to display the SERPs.

But the data we get from the environment or from others does not come with the requisite tags. A manipulative person can attach a wrong tag to data so that any actions based on it are misguided. We can correct the mistakes of wrong tagging only when we know that a particular thing is wrong. But the primary question is “how do we determine whether a particular thing is right or wrong?”

  1. Is the right/wrong decision based on the credibility of the person or data source giving me the info?
  2. Is it based on a gut feeling I have about it?
  3. Is it based on exclusive info which only I have?
  4. Is it based on wide public opinion?

This quest for data credibility brings us to a world which gives a lot of practical insight into “how Google works”. The above questions are from a similar set of questions that determine how Google differentiates between credible knowledge and spam. The underlying process which Google uses to scan the web, and which we use to understand the world, can be called learning/crawling. In these processes our brain declares a void pointer to an empty section of the brain and stores the content it gets. If there is pre-existing data with a similar tag, the old and new data are compared and one of them is stored.

Having a void pointer means the person first accepts everything and then evaluates it. We call such people open-minded, as they accept everyone at their word; but if such people cannot come in contact with more people, their heads get filled mostly with the lies given to them by outsiders. In the case of void pointers the data is tagged with whatever tag is given. But with an alert mind the data is also given an extra tag called “may be biased”. This extra tag makes the person seek the unbiased truth. At the end of the day the key takeaway from the learning process is “truth”. In order to find it, one has to search a vast ocean of data, developing knowledge continuously and applying that knowledge in the search. If knowledge doesn’t expand, then truth is always “Mission Impossible”.

The Great Hadoop Operating System for Big Data

Some days ago I was going through an article on Big Data. I couldn’t make head or tail of it. I immediately asked IBM and my colleague working at IBM to help me get going on big data. Both of them pointed me to Big Data University and to free e-books by IBMers.

I was going through the book, actively trying to link the different pieces like HDFS, MapReduce, Hadoop, Pig, Hive, Jaql, ZooKeeper, Flume etc. Then I realized:

Hadoop and its different components make up a specialized computing system for big data.

The different components of Hadoop closely mirror the components of an OS.

What’s a Computing System?

A Computing System (CS) (this term is coined by me, so don’t google it) is comprised of many components:

  1. A storage system to store the data submitted via input (e.g. hard disks)
  2. Input devices which produce data streams (e.g. keyboard, sensors)
  3. Output devices (e.g. screen)
  4. An operating system for managing the show for users and hardware (e.g. Windows, Mac)
  5. A machine language, aka machine instruction set (e.g. Intel SSE, Intel MMX, Intel VT-x)
  6. High-level languages for writing apps and scripts (e.g. C, C++, Java, Python)
  7. Application and system software to do the user-defined tasks, as well as manage high-level system activities (e.g. MS Word, Photoshop, CCleaner, antivirus, disk defrag)

The storage system is one of the important abstractions in a computing system. For users, the storage system gives the illusion of a folder/file tree structure, but the files are actually stored as fixed-size blocks on the hard disk platter. The main function of the storage system is to give users an easy-to-manage abstraction of storage while handling the difficult process of physical storage all by itself.

Input devices produce streams of data. A stream can be split into 3 things: the first is the source, which produces the data; the second is the filter, which processes the source stream; the final component is the sink, which is the destination of the stream. In the case of a computer, the source is the keyboard key-press data, the filter is the controller which performs operations like input validation, and the sink is the file in which the data is stored.

The operating system is the interface between the computer and its users. The operating system performs 2 functions: managing the hardware resources and providing an interface for users to do their tasks. The UI is a thin layer over the kernel and doesn’t include much of the complexity of resource management. System management is the tough nut to crack, hence the kernel has modules like process management and memory management, and loads of technologies and algorithms working behind the scenes to make the system usable.

Every processor comes with its own set of instructions it can understand. These instructions form the basis of assembly language, and the set is called the machine instruction set.

High-level languages were created to make the programmer’s job easy. When compiled or interpreted, high-level language programs produce sequences of machine instructions. HLLs also provide a higher level of abstraction so that programmers can focus on complex problems instead of optimizing code for the machine.

Application and system software is created with high-level languages and solves specific problems for users: Adobe PageMaker solved the publishing problem, MS Excel solved spreadsheet computation problems.

Hadoop Ecosystem and Computing System

The CS and the different Hadoop ecosystem components have a lot of similarity between them:

  1. Storage in the CS is similar to the Hadoop File System (HDFS). HDFS is a distributed storage system, and the way data is actually stored in HDFS/CS is totally different from how we view it.
  2. Apache Flume is the input equivalent of a CS input device. Flume routes data into HDFS; it can be viewed as log data continuously being stored in a file without any user intervention.
  3. Hadoop is like the operating system, managing the show for the user as well as managing the resources. The way an OS has components like resource managers, the kernel and file systems, Hadoop has components like Hadoop Core, HDFS, Hadoop YARN and Hadoop MapReduce.
  4. The MapReduce framework is like the machine instruction set.
  5. Pig, Hive and Jaql are high-level languages, the way we have C, Java and Python in a CS. Commands in these languages are converted into corresponding MapReduce jobs.
  6. Mahout, HBase, Cassandra, Ambari and ZooKeeper are the application- and system-software equivalents running atop Hadoop.

How to set up Development Environment for Android?

Weeks ago I plunged into the world of Android app development. Sometimes the instructions given are not straightforward, causing a little bit of trouble for “rote memorizers”, or in industry lingo, “freshers”. Setting up the DE (my shorthand for Development Environment) for Android is akin to “cooking”! You need some ingredients and need to follow the process.

Ingredients required:

The things required for setting up the Android development environment are:

[Note: * links take you to the appropriate download pages so you can download the version appropriate for your system.
** The ADT link directly downloads version 20.0.0 of the plugin to your system; better to use the alternate link given below to download it.]

All Android apps are written in Java. Along with them, the ADT, the SDK and Eclipse are written in Java and require a Java runtime to work. Hence the JDK is the most important ingredient.

The UI of Android apps is written in XML and the logic is written in Java. The code required for even a simple “Hello World!” Android app runs into hundreds of lines. Since a huge amount of coding is involved in Android, it is necessary to use an IDE. “Eclipse” is the most popular IDE for Java and supports plugins to extend its functionality, so it is widely used in Android app development too.

Apps for Android are written in the Java language, but Android doesn’t support the standard Java class libraries, and since Android apps run on the custom-built Dalvik VM, Android has its own set of libraries along with libraries from other vendors. Hence it is necessary to install the Android SDK. (Note: the SDK tools installer downloads only the tools and the Virtual Device Manager to reduce the download size; platforms, documentation etc. have to be downloaded separately.)

The Android ecosystem is constantly developing, hence its documentation and APIs are constantly upgraded. Also, the AVD (Android Virtual Device) uses an Android system image. All of these consume a huge amount of disk space (mine is using 7 GB and has almost all tools and APIs). To manage this huge download, the SDK Manager is provided, which checks for updates to the tools as well as new APIs. All platforms and packages have to be downloaded through the SDK Manager only.

Eclipse doesn’t support Android development out of the box; it requires a plugin, and ADT is the official plugin for this. ADT connects the Eclipse IDE to the Android SDK, and also helps us manage AVDs and SDK packages.

Setting up a.k.a “Cooking” Android:

Now we shall set up the Development Environment.

Step 1: Install the JDK

Download the JDK from Oracle [link: http://www.oracle.com/technetwork/java/javase/downloads/index.html].
Once downloaded, install the JDK on your system.
Once installed, add the JDK installation folder to the environment variable named “Path”. (Note: this path setup is not mandatory if you are using Eclipse, but it is good to do so.)

Images below illustrate how to do this.

JDK Path 1.

Right-click on My Computer and go to Properties.

JDK Path 2

Then, in the Advanced tab, click on Environment Variables.

JDK Path 3

Search for the Path variable and click on Edit.

JDK Path 4

Then add the JDK installation path to the end (the exact path varies from system to system). (Note: make sure to put a semicolon before pasting the directory address.) The Java compiler is located in the “bin” folder of the JDK, hence paste that path, not just “C:\jdk” as in the case above.

Step 2: Installing Eclipse

Download the Eclipse IDE for Java Developers from the Eclipse project website [link: http://www.eclipse.org/downloads/]. The download will be in “.zip” format.

Once the download is complete, extract the contents of the archive to a folder using an archive manager such as WinZip or WinRAR.

(Hint: Eclipse is a very powerful IDE and has some concepts associated with it. Even though you can use Eclipse without knowing these concepts, it is advisable to learn them so that you can take full advantage of the IDE.) Visit this link to know about the Eclipse IDE (Eclipse IDE Tutorial); it contains an article by Lars Vogel, an Eclipse and Android evangelist.

Step 3: Installing SDK

Download the SDK from the given download link [link: http://developer.android.com/sdk/index.html]. Once downloaded, run the installer. The installer checks the JRE version and then prompts you to select the install location. (Hint: it is better to choose a custom location, as the location must be given during the installation of ADT.)

(Note: the SDK contains only the “core” tools, which can be used to download the rest of the SDK packages.)

Once the SDK is installed, run the SDK Manager with “Administrator” privileges by right-clicking “SDK Manager.exe” and choosing Run as Administrator. (Note: Windows XP users need not worry about this; it is required for Vista and 7 users only.) Then proceed to the next step.

Step 4: Installing Platform and Packages

Once the SDK Manager runs, it checks various package download sites for system images, documentation, sample code and APIs. Download the packages you require. A system image is necessary to run an AVD. Vendor APIs are required only for developing apps which use them (e.g. Motorola APIs are useful if you develop apps exclusively targeted at Motorola devices). (Note: I downloaded all the components and APIs including sample code, hence my monstrous download size of 7 GB. Downloading just the Jelly Bean files, including the sample code, comes to around 1 GB.) Once the download is complete you can proceed to the next step of installing ADT.

Step 5: Installing ADT Plugin

ADT installation can be done in several ways. The different methods are:

  • Adding a link to the online repository for Eclipse to check.
  • Downloading the ZIP file of the plugin and updating Eclipse.
  • Copying the individual “.jar” files from the archive and pasting them into the “dropins” folder.

Copying method

Copy the .jar files available in the downloaded ADT_____.zip file and paste them into the dropins folder found in the extracted eclipse folder (shown in the diagram).

eclipse drop ins

After copying, run eclipse.exe, which will install them automatically.

Online Repository Method

We can add the link of the online repository via Help → Install New Software. The following images show the steps.

Eclipse Step1 IDE install

This is step 1.

Eclipse Step2 IDE install

Then, in the dialog box, click on Add, as shown by the arrow.

Eclipse Step3 IDE install

  • Then, in the Location field, paste the following URL.

  • After pasting, click on OK. Eclipse will download and install it and restart automatically.

Alternatively, instead of pasting the location, click on “Archive”, browse to the ADT plugin file and open it. After opening the file, the following dialog is displayed.

Eclipse Step4a IDE install

Make sure the option “Contact all update sites during install to find required software” is selected, and click on Next.

  • On the next dialog, click on “Select All” and hit Next.
  • Then it presents you with the licence agreement; click the “I accept” radio button and hit Finish.
  • While installing, it may show a warning regarding the software not being authentic; just click on OK.
  • After installing the plugins, Eclipse will restart automatically. (If it doesn’t, restart it yourself.)

© 2017 Harry's Tech Space
