For any instrument safety is determined by knowledge of underlying risk.
-Pattu Sir, Freefincal
The headlamps of vehicles come in 2 varieties: the reflector type and the projector type. In the reflector type, the reflector reflects the light falling on it from the light source. The light source sits in front of it, and the reflector arrangement is nothing but a concave mirror sitting behind the light source. It reflects light onto the road, giving illumination for the rider. Projector beams, instead of reflecting, concentrate the beam onto the road. Here a lens is used instead of a mirror, which makes it important for the light source to be behind it. These are the 2 major lighting techs we have. Similar to these lighting types, we have portfolio types mimicking them. You can call one the diversified folio and the other the concentrated folio.
Like the name suggests, in a diversified portfolio each stock you select has a bearing on the performance of the folio. Like reflectors, which are nothing but a set of mirrors aligned in a particular fashion, each selected stock has its impact on the folio. In reflectors, the placement of mirrors has a bearing on where the light will fall; similarly, individual weights matter in a diversified folio. The diversified folio places heavy emphasis on selecting a stock and on the weight assigned to it. If one is able to get a good set of stocks and assign proper weights to them, he will be able to get decent gains from this folio. Failing that will make him a subject of mimicry in office. Most investors normally fall in this category, and fail at it too. And by far this is a very difficult thing to do, as one needs to keep a close eye on the weights of all the stocks.
In a concentrated folio, you select the universe/market/index first. In a projector beam, it is the light source which is chosen first. Once the universe is chosen, you run a formula on it based on your criteria. In a projector beam, where it is focused matters most; similarly, what kind of theme you are focusing on matters most for the formula. If the lens is small, less light comes out; likewise, if the shortlisting criteria are too tight, fewer stocks to invest in come out. If the lens is too large, the outgoing light is spread too thinly; similarly, if the formula is too lenient, it will throw out a large number of stocks, none of which will have any meaningful concentration in portfolio performance.
A simple thumb rule to know your folio type: if you are able to define the theme of a portfolio with ease, then the folio you constructed is a concentrated one. For example, if one is constructing a folio on the faster-growing sectors of the past 2 years, then that list will not include HDFC Bank, as the banking sector has lots of duds which drag its performance down compared to the other 4 growth sectors, namely FMCG, Auto, Pharma and IT. (PS: Sectors referred to here are lifted from the Nifty Growth Sectors Index, an index which focuses on the fastest-growing sectoral indices and picks the top 4 sectors.)
Here I have explained the 2 broad ways to select stocks. The thing with both is that they are mutually exclusive. If you are picking stocks individually, then your folio is a diversified one; if you are using a screener with a formula, then your folio is a concentrated, theme-based folio. Failed diversified folios are called dhobi-list folios. Failed concentrated folios can be called burnt themes. Hope you have done proper homework while investing, otherwise you can share your investment experience on AIFW, serving as a reminder of being a pig.
A financial emergency is like a laddoo: if swallowed like a novice, it is going to be your undoing. That's why you normally get gyani responses like "bite only as much as you can chew" etc. To get rid of the half-gyan you have accumulated through the years, this post tells you "how to wolf down a laddoo". The principles are applicable to financial emergencies too 😉.
First, be aware of your limitations. If you don't have a large mouth, then you can't wolf down a whole laddoo. Similarly, if you don't have a large emergency corpus, you cannot say "It's clobbering time!!!" to a financial emergency. There are numerous thumb rules on this: some state 3 months' income, some go by health insurance covers, etc. After all, people have multiple thumbs, so their thumb rules are expected to be plural. Hence the factors to follow are these: the bigger the corpus, the better, and life experience determines the size of the corpus.
Your life experience determines the size of your emergency corpus.
Second, swallow it whole if you can. If the laddoo is smaller than the area available in your mouth, you can swallow it whole; if bigger, then you need to chew a part of it. One caveat here is that a laddoo cannot occupy the whole space available in your mouth, as that would not leave any room for it to be crushed. In the same way, go for credit if the financial emergency is bigger than your emergency corpus, otherwise swallow it whole. The thing to be noted here is that minimum balance restrictions should not get broken in the process.
Third, crush it. Many people get into josh, leave the laddoo as it is, and try swallowing. A whole laddoo is bigger than the food pipe, hence it causes trouble if left as it is. So go and crush it with impunity. Similarly, a financial emergency should be crushed into smaller chunks.
Finally, slowly digest it over the course of time. Once the laddoo is crushed in the mouth, you can slowly relish it. Similarly, after the financial emergency is over, the corpus too needs to be restored back into shape.
In the previous part of the Website Loading series, I explained the application layer and the protocols that operate in it. That article covered the basic grunt work done by HTTP and the DNS system. This article is to dispel doubts about whether the website loaded is secure or not. How will the website authenticate you? Is your password under threat? All these things come under the domain of encryption and authentication.
Once a communication line is established with the server during website loading, it is necessary to ensure that the communication line is not tampered with midway. This tunneling is ensured by using encryption.
While building communication with the server, the client needs to prove his legitimacy. This proving of legitimacy is the job of authentication. [Read More: Here are various authentication techniques explained.]
The security of the system has 2 sources of attack, hence 2 techniques are used. One source is a false individual acting on your behalf (impersonation at the end points). The other is a person on the network snooping on your communication (eavesdropping). Encryption prevents eavesdropping; authentication prevents impersonation. Let's get on with authentication first.
In the real world, we use names to identify a person. Yet we often hear cases of a person misusing a name to get his work done. Banks go one step ahead and use signatures to identify individuals. In the case of computers, there are 4 things used to identify individuals. (Read More: Authentication Techniques for Mortals, for computer authentication models.)
Computers use unique usernames to identify individuals, just as we have names in real life. But a username alone is not sufficient, as it can be misused. To help in this aspect, passphrases were introduced. The passphrase is set when you first approach an organization to become its client. When you sign up for a service like mail, you are asked to set a password, and to set a reasonably difficult one, because the org doesn't want your password to be easily guessable. The username combined with the passphrase identifies an individual. If someone wants to impersonate you, he needs to know both your username and your passphrase.
The system merges the username and password together into a single string, and this string is then hashed, i.e. one-way encrypted, and sent to the server. The server then matches this hashed string with its own credentials generated at sign-up time to log you in. Due to the one-way encryption (aka hashing), the server doesn't know your password, hence it is safe from misuse by the server administrator too.
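As a minimal sketch of the idea (real systems use salted, deliberately slow hashes like bcrypt or Argon2, and the exact scheme varies by service; the names below are made up for illustration):

```python
import hashlib

def credential_hash(username: str, password: str) -> str:
    """Merge username and password, then one-way encrypt (hash) the result."""
    merged = f"{username}:{password}"  # the combined credential string
    return hashlib.sha256(merged.encode()).hexdigest()

# At sign-up the server stores only the hash...
stored = credential_hash("ravi", "s3cret")
# ...and at login it compares hashes, never needing the raw password.
print(stored == credential_hash("ravi", "s3cret"))  # → True (matching credentials)
print(stored == credential_hash("ravi", "wrong"))   # → False (wrong password)
```

Since hashing only goes one way, even someone who reads the server's database cannot recover the original password from the stored string.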
Passphrases are called by various names: passwords, PINs, OTPs, etc. Passphrases can even be generated on the fly by RFID cards, fingerprints and retina prints. In short, it is the passphrase which identifies you uniquely. This passphrase can be something you remember (password, PIN), something sent to you (OTP), something you have (keygen app, RFID card), or something you are (your fingerprint or retina print). All these things (passwords, PINs, OTPs, fingerprints, keygen codes) at the end get converted into an alphanumeric string, hence the term passphrase. The computer compares this passphrase to identify you.
But as we know, there are really gullible users who share their passphrases. To overcome their gullible nature, another layer of passphrase was added. This system of a username and 2 passphrases is called 2-factor authentication. Often this 2nd-layer password is freshly generated and sent to the user, like an OTP. This 2nd layer works on the "what you have" principle rather than the "what you know" principle often used for passwords. A device/app can also be given to the user to generate these passphrases, like a keygen. OTPs and RNG grids are "what you have" things, as they are on your device and freshly generated. Even RFID can be used for this purpose, but RF readers are not prevalent.
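To give a feel for how a keygen app generates these codes, here is a minimal sketch of HOTP, the counter-based one-time password scheme from RFC 4226 that authenticator apps build on. The secret below is the RFC's own test key; this is illustrative, not production code.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short one-time code from a shared secret and a counter."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 of the counter
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret; both sides share it, so both compute the same code.
secret = b"12345678901234567890"
print(hotp(secret, 0))  # → 755224 (first RFC 4226 test vector)
```

Because both the server and your device hold the secret and the counter, both can compute the same fresh code, and an eavesdropper who sees one code cannot reuse it.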
In the case of biometrics, your fingerprint is used to generate a random alphanumeric string which is matched with the server to identify you. Since biometrics are unique to individuals, a separate username is not required. The encrypted alphanumeric string is used as the credential on the server, and the biometric aspect, be it fingerprint or retina print, becomes a unique username-and-password combo. The process of converting this biometric info to an alphanumeric string is equivalent to encryption, and another one-way encryption of this string prevents misuse at the server side. To impersonate biometrics, one is supposed to have the same fingerprint or retina print, which is next to impossible.
In the real world, we use coded language to pass on secrets (often used by 11th and 12th std. boys). All such coded language is essentially a form of encryption. The role of encryption is to prevent an unauthorized person in the middle from snooping on you. In the case of code words, the words are pre-decided by friends at college, and they use them whenever possible. But the code words would not be decipherable by your nosy neighbor, because he does not know their hidden meaning. Encryption does the same. It converts your information into a gibberish string, which can be decoded only by the intended recipient.
There are 3 types of encryption: symmetric, asymmetric and hashing. The coded language in the above example was symmetric encryption.
In symmetric key encryption, encryption and decryption both happen based on an input passphrase. In the case of encrypting hard drives, the encryption algorithm asks for a password to be set while encrypting the drive. Once the drive has been encrypted, the same password is needed to decrypt it. Since the keys used for encryption and decryption are the same, it is called symmetric key encryption.
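A toy sketch of the symmetric idea (real drive encryption uses AES, not this; it only shows that the very same key both scrambles and unscrambles):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"passphrase"
ciphertext = xor_cipher(b"my secret diary", key)  # encrypt
plaintext = xor_cipher(ciphertext, key)           # decrypting uses the SAME key
print(plaintext)  # → b'my secret diary'
```

Running the same function twice with the same key brings the original back, which is exactly the symmetry that gives the technique its name.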
Even in the case of the coded language in the above-mentioned example, the code words are established by friends, hence only friends can decode them. A third party doesn't know the code, hence he is unable to decode the meaning. This type of encryption is used to encrypt your mobile's contents.
The main drawback with symmetric key encryption is that compromise of the passphrase compromises you. Once the passphrase is known, anybody can decrypt and view your content. For this reason, symmetric key encryption is not used for securing web communication, but for securing your device. For the web, another tech was created to overcome this problem, called public key encryption.
To overcome the problem of the key getting compromised, this dual key encryption was created. Here the keys required for encryption and decryption are different. The server sends the public key to the client. The client encrypts the content with the public key, then sends the ciphertext (the security parlance name for encrypted content) to the server. The server then uses its private key to decrypt the ciphertext and retrieve the plaintext message. Since 2 keys are used, it is called public key encryption.
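A textbook-RSA sketch with deliberately tiny numbers makes the two-key idea concrete (real servers use keys of 2048 bits or more; these values are the standard classroom example, purely illustrative):

```python
# Toy RSA: two different keys, one public and one private.
p, q = 61, 53
n = p * q                      # 3233, shared by both keys
e = 17                         # public exponent: (e, n) is the public key
d = 2753                       # private exponent: (d, n) is the private key
assert (e * d) % ((p - 1) * (q - 1)) == 1  # d is e's inverse mod phi(n)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # client encrypts with the PUBLIC key
decrypted = pow(ciphertext, d, n)  # server decrypts with the PRIVATE key
print(ciphertext, decrypted)       # → 2790 65
```

Knowing the public pair (e, n) is not enough to decrypt; one would have to factor n to recover d, which is what makes it safe to hand the public key to everyone.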
With public key encryption one may feel secure, but this method is vulnerable to man-in-the-middle attacks. In this attack, an attacker keeps the public key sent by the server to himself and sends you a fake public key instead. You send the message to the attacker, thinking him to be the safe server. The attacker now has your credential data to compromise you. To overcome this problem, a reverse version of the same public key encryption is used.
In the reverse version of public key encryption, the server sends the public key generated by it as well as a certificate containing that public key, issued by a certifying authority. You receive both together. Once they reach you, the certifying authority's key you already have (shipped along with your operating system) is used to verify the certificate. Once the certificate is verified to be genuine, its validity period is matched with your computer's date (a warning is shown if your computer's date is wrong, as it fails the matching). Only after this do the website loading works continue.
Many purists don't consider hashing part of encryption. In hashing, a variable-length string is taken and mapped to a fixed-length string. The specialty of this technique is that the mapped string, called a hash, is totally different even if you change 1 character. So a thief cannot guess the username-password combo by going through the hash. (PS: some have done so already with weaker hashes.) As referenced earlier in the password section, hashes are stored on the server to authenticate you, and the hash is sent to the server using public key encryption. In the case of online storage services like Dropbox, it is this kind of hash that is used to encrypt the contents you store on their service with symmetric key encryption.
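You can see both properties, fixed-length output and total change on a 1-character edit, with a quick sketch:

```python
import hashlib

h1 = hashlib.sha256(b"password1").hexdigest()
h2 = hashlib.sha256(b"password2").hexdigest()  # only the last character differs

print(len(h1), len(h2))  # → 64 64 (fixed-length output, whatever the input size)
print(h1 == h2)          # → False (the two hashes share no resemblance)
```

This "avalanche" behavior is what stops a thief from working backwards from a hash to the credentials that produced it.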
Here are some myth busters:
If this has aroused curiosity, don't hesitate to do the Coursera course "Internet History, Technology, and Security", which digs deeper into this field.
Tax Saving is icing on the cake, not the cake in itself.
-Ashal Jauhari (Asan Ideas for Wealth)
Whenever a person types www.google.com in his address bar, behind the scenes lots of work happens to load Google's website. The very act of website loading requires the proper functioning of various elements of the technology stack. There is the DNS system helping to connect with the server. One needs to know about lots of lower-level protocols to actually transmit the data. One also needs to be mindful of downloading the images and all required assets for proper website loading.
Since the internet was a very complex project, it was split into independent layers to help technologists build its various complex aspects. These layers combined together are called the "Internet Protocol Stack". A protocol is just a set of rules which needs to be followed by the software implementing it. The top-layer protocols work independently of the bottom-layer protocols. All the layers are given predefined responsibilities to perform. The various layers of the stack and their responsibilities are listed below.
These are the 4 layers of the TCP/IP stack.
The world wide web is built on a protocol called HTTP, which stands for Hyper Text Transfer Protocol. That's the main reason why websites show http:// in the beginning. HTTP is an application layer protocol designed to send HTML (Hyper Text Markup Language) documents, which display a web page. Computers which understand HTTP requests are called servers. The client is the computer which requested the HTML resource by sending the HTTP request. The browser is the program which interprets the HTML doc and displays it. The URL (Uniform Resource Locator) is the addressing scheme used to identify a web resource.
When Sir Tim Berners-Lee introduced the web for the first time, he designed all the components of the ecosystem: the browser program, the server program, the HTTP protocol, the HTML markup language and the URL addressing scheme. Below are some facts about the WWW ecosystem.
When you type the site name in the browser's address bar, the browser first establishes a connection with the server. The server address is obtained by querying the DNS. The destination server address obtained via DNS is then embedded in the transport layer's destination address field. The HTTP request is prepared and given to the transport layer in the data field. (Note: HTTP uses a transport layer protocol called TCP, the Transmission Control Protocol, for its communications.)
The HTTP request starts with the request line, which consists of 3 main sections. The request line looks like this:
<Method> <URL Path> <HTTP Version>
Ex: GET /index.html HTTP/1.1
The GET is a request to the server asking it to give the resource identified by the given path, using the stated HTTP version. Below the request line, other additional parameters are sent. These additional parameters are called header fields. Some header fields are mandatory and others are optional. (Refer to this wiki for details on header fields.) One has to note that the browser type is also sent in a header field, named user-agent:.
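As a sketch, the structure described above can be pulled apart programmatically (the request text below is a made-up example, not a real capture):

```python
# A hypothetical raw HTTP request: request line, header fields, blank line.
raw_request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: DemoBrowser/1.0\r\n"
    "\r\n"  # the blank line that ends the headers
)

head = raw_request.split("\r\n\r\n")[0]
request_line, *header_lines = head.split("\r\n")
method, path, version = request_line.split(" ")  # the 3 sections of the request line
headers = dict(line.split(": ", 1) for line in header_lines)

print(method, path, version)   # → GET /index.html HTTP/1.1
print(headers["User-Agent"])   # → DemoBrowser/1.0
```

Notice how the browser identifies itself in the User-Agent header, exactly the field mentioned above.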
Once the query is made to the server, the server searches its resource pool and gives the response. Like the request, the HTTP response also starts off with a status line. Below the status line, the usual headers follow. One has to note that the server also identifies itself, in a header field called server:. After a blank line, the response body begins, containing the HTML code.
The HTTP response, like the request, has 3 main sections in its status line. The status line looks like this:
<HTTP Version> <Status Code> <Response Phrase>
Ex: HTTP/1.1 200 OK
The headers follow the status line, followed by the body containing the requested resource. The status codes are subdivided into various series. (Refer to this wiki article for the list of all the status codes.) Remember that 400-series statuses are because the client, i.e. the browser, made a mistake. 500-series errors are because of server problems. The 300 series requires the client browser to take additional actions. The famous 404 error means the client requested a resource which doesn't exist, hence it is a client-side mistake. Error 500, which bloggers like me encounter a lot, means the server has gone kaput due to some sort of misconfiguration, so the mistake is on the server side.
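The series rule above can be sketched as a tiny classifier (the wording of each category is my own summary of the rule of thumb):

```python
def status_series(code: int) -> str:
    """Map an HTTP status code to its series, per the rule of thumb above."""
    series = {
        2: "success",
        3: "client must take additional action",
        4: "client-side mistake",
        5: "server-side problem",
    }
    return series.get(code // 100, "other")

print(status_series(404))  # → client-side mistake
print(status_series(500))  # → server-side problem
print(status_series(200))  # → success
```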
The work of DNS is to fetch the IP address of the server; only after this can the browser continue its website loading works. (Note: DNS uses a transport layer protocol called UDP, the User Datagram Protocol, for communications.)
Whenever you browse a website, its IP address (aka A record) is stored by your operating system in the DNS cache for later use. When a website's IP address / A record is not available in the OS's DNS cache, a DNS query is automatically sent to your ISP's (Internet Service Provider's) recursive resolver. If the recursive resolver doesn't have the A record (PS: often it has), it keeps you waiting and asks a root nameserver for it. (PS: there are only 13 root nameserver addresses; they have links to all the TLDs.) The root nameserver forwards the query to the appropriate TLD nameserver. (E.g. a query for www.google.com will get forwarded to the .com TLD nameserver.) The TLD nameserver forwards the query to the authoritative nameserver, which gives the A record. Once the recursive resolver fetches the A record, it keeps a copy with itself and sends the record to you.
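The lookup order above can be mimicked with a small dictionary-based simulation (real resolvers speak the DNS wire protocol over UDP; the address below is made up):

```python
# Simulated DNS resolution chain with made-up data.
os_cache = {}        # the OS-level DNS cache, initially empty
resolver_cache = {}  # the ISP's recursive resolver's cache
authoritative = {"www.google.com": "142.250.0.1"}  # hypothetical A record

def resolve(name: str) -> str:
    if name in os_cache:            # 1. OS cache hit: no query leaves the machine
        return os_cache[name]
    if name not in resolver_cache:  # 2. recursive resolver also misses...
        # 3. ...so it walks root -> TLD -> authoritative nameserver
        resolver_cache[name] = authoritative[name]
    os_cache[name] = resolver_cache[name]  # both caches keep a copy
    return os_cache[name]

print(resolve("www.google.com"))  # full chain walked on the first lookup
print(resolve("www.google.com"))  # answered straight from the OS cache
```

The caching at both levels is why a site you just visited loads its address instantly the second time.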
The above-mentioned steps are done during website loading. The activities of all these protocols happen at the application layer, which sits atop the transport, internet and link layers, which in turn do a lot more work to keep the internet running. So it is worthwhile to consider the WWW as a public web with decent gentlemen doing the background work. If you have noted the header fields, servers do have lots of information to identify a computer; it is because of that that efficient communications happen. If you want to take a cue about privacy from the above explanation of headers, understand that the WWW is public. Only content stored on your computer or encrypted content is private.
Many times traders describe their process of setting up a trade as scientific and blah blah blah. But to call something scientific, the technique has to go through a proper scientific research process. It has to stand its ground under rigorous review by a tough human being, and the process must be subjected to the tough acid bath of statistics. The scientific research process, unlike a computer algorithm, is a linear one. Here are the steps to follow in the scientific research process.
The above list is how research happens. These steps can also be seen in a different mythological context too. The 6 steps correspond to the What, How and Whys of a thing. (Also read: the What, How and Why framework here.)
The first step in research is the definition of the problem. The problem is nothing but a telling of what is really happening in the environment. A problem definition can be like "Why are sales of Sunfeast biscuits so low in the city of Dharwad?". The problem can also be "How to measure the oversoldness of a stock", etc. In short, whatever the so-called pundits call scientific can be considered the problem definition. The key criterion of a good problem definition is that it should be based entirely on the subject and should not have any reference to statistics.
The review of previous works sounds odd, as many feel their work is unique and will not be covered by previous research. When a review is done, it sheds light on variables and the interplay between them, which can have a bearing on your research. The review also sheds light on biases that may creep in. If you haven't read on the efficient market hypothesis and have started off with your research on trading techniques, it is expected to be one sure-fire biased work. It is the review work which can differentiate a good researcher from a bad one. Review also helps in understanding the subject properly, and helps in forming hypotheses (obtained by subdividing the problem statement into small measurable chunks).
In this phase we set up the research process apparatus, i.e. deciding on the things to measure, how to measure them, etc.
As the first step of it, the problem statement is subdivided into chunks called hypotheses. A hypothesis is a set of 2 statements which describe a small part of the problem. One of them is called the null hypothesis and the other the alternate hypothesis. The null hypothesis states that there is no relation between the variables, whereas the alternate states there is a relation between the variables. Every hypothesis is made of variables which are measured to know which of the two statements holds true. The main feature of the pair is that only one of them can be true.
Apart from preparing hypotheses, you are also supposed to identify the proper audience, also called the sample. (The sample is the set of people on whom the research is done. The sample is selected from the population, hence it is a subset of the population.) The process of selecting the sample is called sampling and is decided in this step itself.
You must also decide how to go about collecting data without letting bias creep in. You segment the sample to allow pure randomness, so that biases due to concentration don't creep in while collecting data. Bias can also question the validity of the research, hence one needs to be careful about it. You can call this step of sampling and prep work the preparation of the statistical model, as it lays out the model for data collection. Only after a proper model is set up can one go about data collection.
(Know more on statistical hypothesis testing on wikipedia, it explains about model too)
Data collection is the smallest step of the research process. Here the research is conducted and data gathered. In research related to humanities subjects, often a questionnaire is submitted to people to answer. There are other types of experiments to collect data, like focus groups, blind tests etc., for human participants. For non-humanities subjects like physics, the experiment is run on machines and the data is captured. The machine on which the experiment is conducted is part of the experimental setup. The data collected from it is called the sample. The data analysis step follows data collection.
This is the step where the acid bath of statistics happens. This is one of the longest and most important steps of the scientific research process. Here the statistical analysis of the codified data happens, and based on that analysis the results are published.
Once the data is collated from the questionnaire, it needs to be cleaned and made machine-ready. For example, if the questionnaire had rating scales, then the answers would range from strongly agree to strongly disagree. These kinds of Likert-scale answers cannot be fed into machines directly, hence they need to be codified, like strongly agree = 5 and strongly disagree = 1. (One has to note that the coding strategy is pre-decided in the research setup phase; here it is only implemented.)
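The codification step is mechanical once the scheme is fixed; a sketch with a hypothetical 5-point scheme:

```python
# Hypothetical Likert coding scheme, as would be fixed in the setup phase.
LIKERT_CODES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Made-up questionnaire responses, converted to machine-ready numbers.
responses = ["Strongly agree", "Neutral", "Agree", "Strongly agree"]
coded = [LIKERT_CODES[r] for r in responses]
print(coded)  # → [5, 3, 4, 5]
```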
After the coding and data entry are over, software like MATLAB, SPSS and SAS is used by researchers to run the various statistical analyses on the data. Things like regression and factor analysis are done in this phase to validate the hypotheses. The analysis is called hypothesis testing, since it is done to determine which hypothesis is true. Based on the result spewed out by the software, the valid hypothesis is determined.
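To make hypothesis testing concrete, here is a bare-bones sketch of a two-sample test on made-up Likert scores (real analyses use SPSS, R or similar, with proper p-values; this only computes Welch's t statistic by hand):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic: the gap between group means,
    measured in units of their combined sampling noise."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Made-up coded responses from two hypothetical customer groups.
group_a = [5, 5, 4, 5, 4, 5, 4, 5, 5, 4]
group_b = [2, 1, 2, 2, 1, 2, 1, 2, 2, 1]

t = welch_t(group_a, group_b)
# |t| far above roughly 2 -> the data rejects the null hypothesis
# ("no relation between group and score") in favor of the alternate.
print(round(t, 2))
```

The decision rule is what turns the software's output into a verdict on which of the two hypotheses holds.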
The final step is publishing the results and their subsequent review by experts. Based on the experiment conducted, the valid hypothesis is collated and the result is published. If one can recall, the data collection for gravitational waves ended way back in September 2015, yet it took a lot of time to run the analysis and publish the results. Once the results are written up, they have to go through a panel which vets whether the research was done in an unbiased way; once it is fairly confident of the absence of biases, the research is published.
There are some caveats in this scientific research process. In the above-mentioned process, the researcher does the experiment to confirm his gut feeling. For example, the discovery of gravitational waves was to confirm whether gravitational waves exist or not. This kind of confirmation of gut feeling is called confirmatory research. There is also another branch called exploratory research; it follows a different process, where instead of hypothesis testing it just measures the variables and tries to build relationships between them. This research falls under the realm of big data.
The above-said steps describe the scientific research process. All the statisticians in a company follow these steps when they do their market research or other kinds of R&D work. To know more about the scientific research process in depth, you can read the book by Thomas Davenport called "Keeping Up with the Quants".
Here are the Amazon links to its hardcover and Kindle editions.
If you are a trader, then remember that trading is an art, not a science. That's because the first rule of trading is "be flexible", and science is never flexible. Since the above steps are very laborious, don't call every one of your gut feelings scientific. You can call your project in MBA scientific though ;).
Many times you would have come across personal finance acronyms on the leading personal finance group on Facebook, "Asan Ideas for Wealth", and couldn't make head or tail of them. Here is a list of personal finance acronyms/catchwords and their expansions/meanings.
There are also some industry-standard terms; for those you can go through Getting Started with Mutual Funds to get a bird's-eye view of the mutual fund industry.
Websites are now centerpieces for every business and blogger. News channels splatter ads telling you to have a website for your own business. The website is the face of your company and plays a vital role in building trust. Since it is closely linked to your identity, it is imperative you know the levers that control your website. Having access to all the levers helps you in managing it. These are the things which every website owner must have access to.
A website, like a house, has a name (domain name), a place (hosting) and things that make it up (content). The domain name is the name people type in their browser to reach you. The browser then tries to ascertain your location to establish contact. Once contact is established, the contents are loaded in the browser. But the majority focus only on content and lose sight of the other things, resulting in problems like one entity having too many names littered across the web, or zombie domains. Hence let's jump in and understand the various levers that control your website.
The domain name is the name of your website. If you see this website's name, www.harshankola.in, that is the domain name of this site. The ".in" is the TLD, aka Top Level Domain. There are lots of varieties of TLDs like .com, .org, .gov, .gov.in, .pk, .cn, .ac.in, .net, .co.in etc. The middle part is called the domain, and the beginning www is called the subdomain. If you have a domain name like techblog.wordpress.com, the TLD will be ".com", the domain will be "wordpress" and the subdomain will be "techblog".
To register your domain name (ex: google.com, facebook.com) you have to approach a domain registrar like BigRock, GoDaddy etc. Domain names are unique to websites, hence while registering, the registrar first searches whether the domain name chosen by you is available and not taken by another. For example, if you try to register wordpress.com, google.com or facebook.com, they will not be available as they are already taken, hence you will not be able to register them. Only if a domain is available can you proceed to register and buy it. But please do note that domain registrars are the guys allowed to sell domains, not subdomains. Subdomains are given out by the owner of the domain. For example, if you want techblog.wordpress.com, you need to approach "wordpress" to licence the subdomain "techblog", not a domain registrar.
The domain registrar actually rents you the domain name for a particular time period, to prevent domain squatting. So you need to keep renewing your domain name regularly. Once you register your domain, you need to make sure the domain name actually points to the place where your content is hosted. If you have a CDN (Content Delivery Network), then you need to change things like the nameserver. If you want to forward the domain to your Blogspot or WordPress blog, it requires changes to DNS records like CNAME, A record etc. To do all these DNS-related things, you need access to the account you created with the domain registrar. So this account is the first lever that controls your website's functions, and it is important you have access to it. Even if you hire a new guy to manage your domain, he has to manage it via this account only, hence keep access to the Domain Registrar's Account.
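For a feel of what those DNS records look like, here is a hypothetical zone-file fragment (the names and addresses are made up; the actual editing interface varies by registrar):

```
; hypothetical DNS records for example.com
example.com.       A      203.0.113.10           ; points the domain at the hosting server's IP
www.example.com.   CNAME  example.com.           ; aliases the www subdomain to the domain
blog.example.com.  CNAME  myblog.wordpress.com.  ; forwards a subdomain to a hosted blog
```

The A record is what the DNS lookup ultimately returns to browsers, while CNAME records let one name stand in for another.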
The hosting service gives you space to host your website. Whenever a person types the name of a website, the browser first contacts the domain name server to find out the hosting server's address. Then the browser contacts the hosting server, and on successful connection it begins loading content from this server.
There are various types of hosting, namely shared hosting, virtual private server hosting and dedicated server hosting, to cater to the various requirements of users. In shared hosting, all the websites hosted by the service provider are stored on the same disk, and this is the cheapest form of hosting too. With a virtual private server, you get access to a virtual machine running on the hosting server; this is slightly costlier, as the virtual machine gives you more freedom to do things. With a dedicated server, you are given a full-fledged computer to host on, hence it is very expensive. While selecting a hosting plan, it is also necessary to know the maximum bandwidth and maximum space allowed for your site. If your website doesn't plan to have a blog, then the space required will be minimal. If many people will visit your site, then the bandwidth requirement will be higher. Hosting services also provide e-mail services and other features. Creation and management of subdomains are jobs of the hosting service itself. The type of operating system on the hosting server also matters for the content to be hosted. Once you have made up your mind on a hosting plan, you purchase it for the requisite time period. Like the domain name, the hosting space is also leased to you for a particular duration.
To give a single point of access to all these services, the hosting service provides you a control panel (cPanel is the most famous one, provided by many hosting services; some give their own in-house versions). If you want to upload content to your site, the FTP server address is available through the hosting control panel. To manage the various services provided by the hosting service, such as emails, subdomains and FTP accounts, you need the credentials of the hosting service's control panel account. So this cPanel account is the second lever, providing you the tools to control your website's hosting functions. To add a new email, add a new subdomain, or let a new developer build your site from scratch, you need this cPanel account.
The contents are the things displayed to your users. Once a link to the hosting server is established, the host's server software starts serving your content to the browser. If the content is just an HTML file, the server sends it directly to the browser; otherwise it processes the file and sends the appropriate output to the browser.
There are lots of technologies for displaying content. At the simplest end of the spectrum are HTML pages; an HTML page is static and doesn't change. Then there are dynamic pages built with technologies like ASP, JSP and PHP. One can also use content management systems like WordPress, Joomla or Drupal to build websites. The most important constraint with content is that it must load fast. It is also necessary to know the various nuts and bolts of content systems (Also read: How I built the SKDRDP site from scratch). Oversized images and resource files have the power to slow your site down. If some data about your business changes, you will need to change the content. If the content is plain HTML, you need a full-time developer to manage and control your website; if your site runs a content management system, you need to learn its how-tos.
Nowadays the majority of websites use content management systems, as they make it easier to focus on content and forget the various technicalities of running a website. A CMS makes it easy to add new content and also to expand its own functionality by way of plugins. Since the benefits of a CMS are obvious, it is important to have access to the CMS's administrator account. So this CMS account is the third lever that controls your website. To make any changes to content you need this CMS administrator account.
These are the various levers that control your website. Please do keep access to all of them to avoid a future catastrophe with your website.
I started off my journey into equities by plunging straight into direct equity. It began with random stock picking and hoping for a miracle to make money (not the recommended way). One day I got a mail from my broker describing various investment products, one of which was mutual funds. Luckily, on the same day Uma Shashikant madam shared an article by Pattu sir which was also on selecting mutual funds. That roused my interest in mutual funds, as it looked like easier money: just pick the right one and be done with it.
There are various ways of shortlisting and selecting a mutual fund. I used Pattu sir's guide to shortlist the funds (here is the link to it). While I was selecting mutual funds I had no clue about the ABCs of these ratios; gut feeling was my only guide.
Here is a screenshot of how I shortlisted the balanced funds.
The shortlisting of funds to invest in happened on Value Research Online. Since I was not convinced of the efficacy and workings of star ratings, I didn't bother to look at them. I started off compiling all these data points in Excel. After collecting the data, selecting a mutual fund out of it was a breeze.
The thumb rules I followed were
These thumb rules are sufficient to pick a fund of choice, but following them blindly is certainly not going to make you a good picker. So here is a small explanation of what these risk measures mean.
Fund's Alpha = Fund's Average Returns - Benchmark's Average Returns
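The simplified alpha above can be computed in a few lines. The monthly return figures below are made up for illustration; note that the full Jensen's alpha additionally adjusts for beta and the risk-free rate, whereas this sketch follows the simplified definition given here.

```python
# Simplified alpha: average fund return minus average benchmark return.
# The monthly return percentages below are illustrative, made-up numbers.
fund_returns      = [1.2, -0.5, 2.1, 0.8, 1.5]   # % per month
benchmark_returns = [1.0, -0.8, 1.6, 0.9, 1.1]   # % per month

def avg(xs):
    return sum(xs) / len(xs)

alpha = avg(fund_returns) - avg(benchmark_returns)
print(f"Alpha = {alpha:.2f}% per month")  # → Alpha = 0.26% per month
```

A positive alpha means the fund, on average, beat its benchmark over the period measured.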
Do go through this article by Pattu sir, which visualizes various mutual fund risk measures. [Visualizing Mutual Fund Volatility Measures]
While browsing my Google Now feed I came across an article by Pattu sir on index funds. The indices mentioned there captured my attention and sparked off a curiosity to know more about them. I searched for those indices in various places like a boy searches for a girl. The search threw up more interesting results about the index, with its mind-blowing outperformance of the Nifty 50. The deeper I dug, the more interesting it got. Read on for my 2-month journey into Nifty Alpha 50.
Though the spirit was kindled by Pattu sir, the official journey began with a Google search on CNX Alpha stocks (now known as Nifty Alpha 50). The search landed on Nifty's Strategy Indices page, and I immediately downloaded the list of stocks, the methodology and the fact-sheet of that index. In this phase I was in data-gathering mode; I collected all the info I could.
After that, I tried to understand the methodology behind the Nifty Alpha 50 index and its fact-sheet. Initially it all sounded Greek to me. I started asking questions in AIFW whenever I didn't understand something. The questions were technical, like: how does Beta get calculated? What is R^2?
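Those two questions have short numerical answers: beta is the covariance of the stock's returns with the index's returns divided by the variance of the index's returns, and R^2 is the squared correlation between the two series. A minimal sketch, using made-up weekly return numbers:

```python
# Beta = Cov(stock, index) / Var(index); R^2 = squared correlation.
# The weekly return percentages below are purely illustrative.
stock = [2.0, -1.0, 3.0, 0.5, -0.5]
index = [1.5, -0.5, 2.0, 0.5,  0.0]

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Population covariance of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

beta = cov(stock, index) / cov(index, index)
r_squared = cov(stock, index) ** 2 / (cov(stock, stock) * cov(index, index))
print(f"beta = {beta:.2f}, R^2 = {r_squared:.2f}")
```

A beta above 1 means the stock swings more than the index; an R^2 near 1 means the index explains almost all of the stock's movement.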
Once I had a firm grip on the subject I jumped in with smaller purchases. I kept a close eye on all the announcements of NSE; I still visit NSE's Press Releases section even now. The keen focus on press releases helped me know a bit early which stocks would get booted out of the index (for example, the October press release stated that Gujrat Pipav Port, Motherson Sumi and 8 others would be booted out of the index by 25 Oct, so I could prepare for that in my tracker too). Even though the index had big-ticket stocks like MRF, Page and Eicher, I didn't back off; I moved on. My initial plan was to purchase 1 share of each constituent first and then start balancing.
The Nifty Alpha 50 is the fastest-growing index among the NSE indices. Being so, I had to challenge myself to grow my knowledge alongside it. I set myself on that growth by going through each concept behind the index, trying to understand what it was, how it was done and, to some extent, why it was done the way it was done (you can also check out the what? how? why? framework I use).
I started off learning the various concepts like free-float market cap, R^2 and Beta on the old trusted guide Investopedia. I did ask questions on statistical analysis in AIFW too. The other source where I learned things was NSE's Index Concepts section. I also learned some things practically, especially the concept of the Bid-Ask Spread, when trying to trade.
The practical learning of the Bid-Ask Spread was revealing, as it shed light on the costs I would incur for bigger purchases of stocks like MRF, Page and Eicher. The trading cost estimates for this index were higher, so I decided to hop onto a discount broker. This decision cooled off my costs significantly: if I had to pay 0.5% of 40,000 as brokerage, it would come to a whopping 200 rupees. So the cost cooling gave me a huge benefit. Apart from costs, the price of acquisition also played a vital role.
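The 40,000-rupee example works out like this; the flat fee of 20 rupees per order is an assumed figure for a typical discount broker, not one stated in the article, and actual plans vary.

```python
# Trade-cost comparison: percentage brokerage vs an assumed flat fee.
trade_value = 40_000          # rupees, as in the example above
pct_brokerage = 0.005         # 0.5%
flat_fee = 20                 # assumed discount-broker charge per order

cost_pct  = trade_value * pct_brokerage
cost_flat = flat_fee
print(f"0.5% broker: {cost_pct:.0f} Rs, discount broker: {cost_flat} Rs")
# → 0.5% broker: 200 Rs, discount broker: 20 Rs
```

Percentage brokerage scales with trade size, so for high-priced stocks like MRF the flat-fee structure wins by a wide margin.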
To further optimize my entry into a stock, I took up technical analysis, which I also learned on Investopedia. The core reason I resorted to technical analysis was to get the entry into a stock right. I learned about various charting techniques and oscillators like RSI. This focus on technicals helped me immensely in not paying too high a price for a stock.
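As an example of such an oscillator, here is a minimal RSI sketch using simple averages of gains and losses over the look-back window (Wilder's original version uses smoothed averages instead). The closing prices are made-up numbers.

```python
# Simple-average RSI over the last `period` price changes.
# Wilder's classic RSI smooths the averages; this is the simpler variant.
def rsi(prices, period=14):
    gains, losses = [], []
    for prev, curr in zip(prices, prices[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0          # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# Illustrative closing prices (15 closes give 14 changes).
closes = [100, 102, 101, 103, 104, 103, 105, 106, 105, 107,
          108, 107, 109, 110, 109]
print(f"RSI(14) = {rsi(closes):.1f}")
```

Readings above 70 are conventionally read as overbought and below 30 as oversold, which is how an oscillator like this helps time an entry.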
These various connections between technical analysis, managing costs and understanding markets increased my mental connections. This interlinking of concepts is like drawing a rangoli in my mind: by learning new things the rangoli expands from 3 x 3 to 4 x 4 and higher. My focus on this index is not shutting off my investments in mutual funds at all; the SIPs will run their due course. To me, tracking Nifty Alpha 50 is like a journey: when I reach the same point I started from, I will be much wiser from the experience, even though the sum total of the exercise is zero.