How to Scrape Zerodha Kite Real-Time Stock Prices Using Python?


Discover how to use Python to extract stock price data from the Zerodha Kite web platform. This project will guide you in building a utility script to download stock price data in CSV format. Follow the step-by-step process to get started!

The Objective of Zerodha Kite Data Scraping


Designing a trading system requires stock price data across various timeframes. For an intraday strategy backtest, data in 1-minute to 15-minute timeframes is essential. However, such fine-grained data might not be readily available to students or learners for educational purposes. To address this challenge, a small utility script was developed that scrapes stock price data from Zerodha Kite. The objective is to facilitate access to stock data for educational use, making it easier to analyze and backtest trading strategies. This project is a valuable resource for anyone interested in learning about stock trading and strategy backtesting, deepening their understanding of the financial markets. Here, we provide a detailed walkthrough of the steps involved in scraping Zerodha Kite real-time stock prices using Python.


Web scraping Zerodha Kite is an automated technique for extracting large amounts of data from the website using software or scripts. Typically, it involves obtaining unstructured HTML data, converting it into structured data, and storing it in spreadsheets or databases. To prepare this unstructured data for analysis, processes like data cleaning and formatting are applied. Libraries such as pandas, numpy, datetime, requests, and json facilitate the data cleaning and structuring tasks. These essential steps ensure the scraped data becomes valuable and ready for further analysis or integration into other applications.

List of Data Fields


The following data fields are available when scraping the Zerodha Kite web platform:

  • Current Price
  • Volume
  • Open Interest
  • Symbol
  • Last Traded Price
  • Bid and Ask
  • Market Depth
  • Time and Sales

Steps to Follow

To perform web scraping from Zerodha Kite, follow these steps:

  • Read login credentials from a local text file for automated login.
  • To scrape stock pricing data, set up a Selenium driver for automatic login to the website.
  • Read the stock dictionary from a text file and store it in a separate variable.
  • Generate a data URL and extract the authentication code from the cookies.
  • Pass the data URL and authentication code to the requests method.
  • Convert the response of the requests method to JSON and create a pandas DataFrame.
  • Perform data cleaning using the pandas library and the time module.
  • Store the DataFrame in CSV format in the desired directory.
  • Repeat all steps for the entire stock list.

These steps will automate the process of scraping data from the specified website and allow you to store the relevant information for further analysis or use.


Automated Zerodha Kite Login: Reading Credentials Safely

  • The sensitive credentials are stored in 'credential.txt' on the local computer to keep them private.
  • The Firefox Selenium WebDriver executable, geckodriver, is downloaded and saved in the project's directory.
  • The browser version is verified to ensure compatibility with the downloaded driver.
  • The "stock_dictionary" file contains the list of stocks whose price data will be scraped.
  • A function reads all credentials from the text file and returns them as a list for further use.
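The credential-reading step can be sketched as below. This is a minimal sketch, assuming "credential.txt" stores one value per line (user id, password, PIN); the file name and layout are assumptions, not confirmed by the original code.

```python
# Hypothetical sketch: read login credentials from a local text file.
# Assumes "credential.txt" holds one value per line (user id, password, PIN).

def get_credential(path="credential.txt"):
    """Return the credentials from the text file as a list of strings."""
    with open(path) as f:
        # Strip whitespace and skip blank lines.
        return [line.strip() for line in f if line.strip()]
```

Keep credential.txt out of version control (e.g., list it in .gitignore) so the login details never leave the local machine.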

Set up the Firefox WebDriver to load the Kite login URL and enable autologin.

Create the "load_kite_get_cookie()" function, which will receive a list containing credential information obtained from the "get_credential()" function.

The function will return the URL cookies, which are essential for extracting the authentication code later.

Use the Selenium WebDriver to load the URL and perform the autologin on Zerodha Kite.

Inspect the login page for auto-login, find the relevant tag, such as "user_id," and pass this information to the Selenium driver.

Similarly, obtain the class information for the login button to facilitate the autologin process.


Parsing URL Response to JSON

  • Create a function to evaluate the stock dictionary from the local drive and return it as a dictionary.
  • For generating the data URL, pass essential details like the stock code, user_id, and start and end dates to the "data_url" function. This keeps a consistent URL structure across multiple stocks.
  • Use a "for loop" to create separate data URLs for each stock, utilizing the information passed as parameters to the function.
  • Obtain the authentication code from the request header by passing the cookies as a parameter to the "get_authorization_code()" function.
  • Check the response status; if it is 200, the authentication is successful.
  • Convert the response to JSON and parse it into a dictionary using the "get_json_response()" function. It ensures data is available in a convenient dictionary format for further processing.
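The URL generation, auth-code extraction, and JSON fetch might be sketched as follows. The URL template and the cookie name ("enctoken") are assumptions based on how Kite's historical-data endpoint has commonly appeared in the browser's network tab; verify both before relying on them, as Zerodha may change them.

```python
# Hypothetical sketch: build a per-stock data URL, pull the auth token from the
# Selenium cookies, and fetch the JSON response. URL template and cookie name
# are assumed, not confirmed by the original code.
import requests

def data_url(stock_code, user_id, start_date, end_date, interval="minute"):
    """Assemble the (assumed) historical-data URL for one stock."""
    return (f"https://kite.zerodha.com/oms/instruments/historical/"
            f"{stock_code}/{interval}?user_id={user_id}"
            f"&from={start_date}&to={end_date}")

def get_authorization_code(cookies):
    """Extract the auth token from the cookie list returned by Selenium."""
    for cookie in cookies:
        if cookie["name"] == "enctoken":      # assumed cookie name
            return cookie["value"]
    raise ValueError("auth cookie not found")

def get_json_response(url, auth_code):
    """Fetch the URL; return the parsed JSON dict if the status is 200."""
    headers = {"Authorization": f"enctoken {auth_code}"}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.json()
    raise RuntimeError(f"request failed with status {response.status_code}")
```

Raising on a non-200 status makes a failed login or expired token visible immediately instead of silently producing empty CSV files downstream.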

Storing JSON Response in Pandas DataFrame

  • The "get_json_response" function will provide stock price data in dictionary format.
  • To save each stock's data as an individual CSV, the "get_data_df()" function will clean and format the data. It will rename the columns to "Date," "Time," "Open," "High," "Low," "Close," and "Volume," and set the required index.
  • Unnecessary columns will be deleted, and the cleaned data will be returned as a pandas DataFrame.
  • The "get_path()" function will provide the directory path to store each stock's CSV files holding OHLCV data.
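The cleaning step might look like the sketch below. It assumes the JSON payload follows the shape Kite's historical endpoint has commonly returned, `{"data": {"candles": [[timestamp, open, high, low, close, volume], ...]}}`; the payload shape and the `base_dir` default are assumptions.

```python
# Hypothetical sketch of get_data_df() and get_path(). The JSON payload shape
# is assumed, not confirmed by the original code.
import os
import pandas as pd

def get_data_df(json_response):
    """Return an OHLCV DataFrame indexed by Date and Time."""
    candles = json_response["data"]["candles"]
    df = pd.DataFrame(
        candles,
        columns=["Timestamp", "Open", "High", "Low", "Close", "Volume"])
    ts = pd.to_datetime(df["Timestamp"])
    df["Date"] = ts.dt.date              # split the timestamp into two columns
    df["Time"] = ts.dt.time
    df = df.drop(columns=["Timestamp"])  # delete the now-redundant raw column
    return df.set_index(["Date", "Time"])

def get_path(stock_name, base_dir="data"):
    """Directory path for one stock's CSV file (directory created if missing)."""
    os.makedirs(base_dir, exist_ok=True)
    return os.path.join(base_dir, f"{stock_name}.csv")
```

With this shape, `df.to_csv(get_path(stock_name))` writes one OHLCV file per stock.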

The process involves scraping stock prices in Python for every stock in the provided list.

All the functions are combined into a single comprehensive function that takes the parameters start_date, end_date, and timeframe. This function performs the data scraping for all stocks and saves the collected data in the designated directory as individual CSV files.
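The combined driver might be sketched as below. The function name and the injected `fetch`/`save` callables are hypothetical: passing them in keeps the loop testable without a live Kite session, at the cost of being less literal than a function that calls the helpers directly.

```python
# Hypothetical sketch of the combined driver loop. `fetch` and `save` stand in
# for the scraping and CSV-writing helpers described above.

def scrape_all_stocks(stock_dict, start_date, end_date, timeframe, fetch, save):
    """Scrape every stock in the dictionary and save each one as a CSV."""
    saved = []
    for stock_name, stock_code in stock_dict.items():
        df = fetch(stock_code, start_date, end_date, timeframe)  # one stock's data
        save(df, stock_name)                                     # write its CSV
        saved.append(stock_name)
    return saved
```

In the real pipeline, `fetch` would chain `data_url`, the auth code, and the JSON-to-DataFrame cleanup, while `save` would write the DataFrame to the path for that stock.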


Execute the code to scrape stock data and save it as CSV files.


For further details, contact iWeb Data Scraping now! You can also reach us for all your web scraping service and mobile app data scraping needs.

Let’s Discuss Your Project