

Web scraping with Selenium 101

Written by Anton L (updated on December 2nd, 2025)

Web scraping is the process of fetching data from web pages on the Internet. If you want to start scraping with Multilogin profiles, follow this guide and you'll learn how to write a simple script!

This article walks you through creating the script step by step. If you just want the full script for reference, feel free to scroll to the end.

 

Step 1: prepare IDE or similar software

You'll need something to write your script in. The choice is up to you, but we recommend using an IDE. Follow the first 4 steps from the following article: Getting started with automation scripting.

Step 2: create the script connecting to the API and define functions

In this step, you'll make the script work with the API. The script will include:

  • API endpoints
  • Variables for credentials
  • Defined functions for signing in, opening, and closing a profile
  • Imported modules: requests, hashlib, and time, plus a few Selenium modules
  • A sign-in request

Use the following template for it:

import requests
import hashlib
import time
from selenium import webdriver
from selenium.webdriver.chromium.options import ChromiumOptions
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By

MLX_BASE = "https://api.multilogin.com"
MLX_LAUNCHER = "https://launcher.mlx.yt:45001/api/v1"
MLX_LAUNCHER_V2 = (
    "https://launcher.mlx.yt:45001/api/v2"  # recommended for launching profiles
)
LOCALHOST = "http://127.0.0.1"
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}
# TODO: Insert your account information in both variables below
USERNAME = ""
PASSWORD = ""
# TODO: Insert the Folder ID and the Profile ID below
FOLDER_ID = ""
PROFILE_ID = ""


def signin() -> str:
    payload = {
        "email": USERNAME,
        "password": hashlib.md5(PASSWORD.encode()).hexdigest(),
    }
    r = requests.post(f"{MLX_BASE}/user/signin", json=payload)
    if r.status_code != 200:
        # Stop here: without a token the rest of the script can't work
        raise RuntimeError(f"Error during login: {r.text}")
    return r.json()["data"]["token"]


def start_profile() -> webdriver.Remote:
    r = requests.get(
        f"{MLX_LAUNCHER_V2}/profile/f/{FOLDER_ID}/p/{PROFILE_ID}/start?automation_type=selenium",
        headers=HEADERS,
    )
    if r.status_code != 200:
        raise RuntimeError(f"Error while starting profile: {r.text}")
    print(f"\nProfile {PROFILE_ID} started.\n")
    selenium_port = r.json()["data"]["port"]
    # For Stealthfox profiles use: options=Options()
    # For Mimic profiles use: options=ChromiumOptions()
    driver = webdriver.Remote(
        command_executor=f"{LOCALHOST}:{selenium_port}", options=ChromiumOptions()
    )
    return driver


def stop_profile() -> None:
    r = requests.get(f"{MLX_LAUNCHER}/profile/stop/p/{PROFILE_ID}", headers=HEADERS)
    if r.status_code != 200:
        print(f"\nError while stopping profile: {r.text}\n")
    else:
        print(f"\nProfile {PROFILE_ID} stopped.\n")

token = signin()
HEADERS.update({"Authorization": f"Bearer {token}"})
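One detail worth noting in the template: the sign-in endpoint expects the password as an MD5 hex digest rather than plain text, which is why signin() wraps it in hashlib.md5. A minimal standalone illustration (the password below is a made-up example):

```python
import hashlib

# The sign-in request sends the password as an MD5 hex digest,
# not as plain text. The password here is a made-up example:
password = "my-password"
digest = hashlib.md5(password.encode()).hexdigest()
print(digest)  # a 32-character lowercase hex string
```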

The template is similar to the Selenium automation example, except that it imports the following module at the beginning (we'll need it for scraping):

from selenium.webdriver.common.by import By

 

Step 3: choose a web page to scrape data from

You can use any website that contains text, but for this guide, we recommend trying this page – it’s great for practicing automation tasks: Large & Deep DOM.

Step 4: look for the target info

In our case, it will be the data from the table on that page.

We'll get all the values from the table. Here's what you can do:

  1. Open DevTools in your browser. Here's how to do that for Chromium- and Firefox-based browsers:
    1. Windows and Linux: press Ctrl + Shift + I
    2. macOS: press Cmd + Option + I
  2. Make sure that you are on the “Elements” tab
  3. Use search hot key to find the target value
    1. Windows and Linux: CTRL + F
    2. macOS: Cmd + F
  4. Type the text value you want to see. In our case, it is “Table”
  5. Look for the value you will need to use for scraping. In our case, it will be the following: <table id="large-table"> 
  6. Hover over the element with the tags in the “Elements” tab
  7. Right-click it, then select “Copy” → “Copy selector”
  8. Write down the value somewhere – you'll need it later
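As a side note, the value that “Copy selector” puts on your clipboard is a CSS selector string. A quick sketch of how it relates to the id used later in this guide (the selector below is what DevTools typically produces for an element identified by id):

```python
# “Copy selector” in DevTools yields a CSS selector string. For an element
# like <table id="large-table">, DevTools typically produces the id
# prefixed with '#':
selector = "#large-table"

# Selenium can use it as-is with By.CSS_SELECTOR, or you can strip the '#'
# and pass the bare id to By.ID (the approach this guide takes in step 6):
element_id = selector.lstrip("#")
print(element_id)  # → large-table
```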

Step 5: get back to the IDE and add new lines of code

  1. Get back to the IDE of your choice (for example, VS Code)
  2. Click the code field and add a variable for opening and performing actions in the profile: driver = start_profile()
  3. Add driver.get("<your website>"). In our case, it'll be the following command: 
    driver.get("https://the-internet.herokuapp.com/large")
  4. Add a delay so the script waits 5 seconds after opening the web page before running the next commands: time.sleep(5)

Step 6: make the script find the element

Use this command to find the element: driver.find_element(By.<attribute on the page>, "<element>"). It tells the script exactly what to look for on the page. Since the table we found in step 4 has the id large-table, your actual command will look like this:

driver.find_element(By.ID, "large-table")

We'll need to get its value later, so assign the result to a variable, for example, fetch:

fetch = driver.find_element(By.ID, "large-table")

Step 7: print the end result and stop the profile

  1. Use the print() function to print the end result. Since we need to extract the text value, get it from our variable:
    print(fetch.text)
  2. Add the function that stops the profile at the end:
    stop_profile()
  3. Save the .py script; you'll run it after a few more preparation steps
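If you want to go beyond printing raw text later, note that fetch.text returns the table contents as plain text with one row per line. A small standalone sketch of splitting it into rows and cells (the sample string below is illustrative, not the page's actual output):

```python
# fetch.text returns the table as plain text, one row per line.
# The sample below is made up for illustration:
sample_text = "Lorem Ipsum Dolor\n1 Apeirian Hendrerit\n2 Deleniti Feugait"

# Split into lines, then split each line into cells on whitespace:
rows = [line.split() for line in sample_text.splitlines()]
print(rows[1])  # → ['1', 'Apeirian', 'Hendrerit']
```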

Step 8: prepare the script before running it

  1. Install the following Python libraries (look for documentation of your IDE for more details):
    1. requests
    2. selenium
  2. Insert your values into the below variables in the script:
    1. USERNAME: your Multilogin X account email
    2. PASSWORD: your Multilogin X account password in plain text (you don't need to MD5-hash it yourself; the script does that for you)
    3. FOLDER_ID, PROFILE_ID: find these values using our guides on DevTools or Postman

Step 9: run the script

  1. Open the desktop app (or connect the agent if you are using the web interface)
  2. By default, the script works for Mimic profiles. To use it with Stealthfox, replace options=ChromiumOptions() with options=Options() in the following line:
    driver = webdriver.Remote(command_executor=f'{LOCALHOST}:{selenium_port}', options=ChromiumOptions()) 
  3. Run the .py file with your automation code 

To run the script in VS Code, click “Run” → “Run without debugging” (or “Start debugging”).

 

If you've done everything correctly, you'll be able to see the result in the terminal.

Notes

Congratulations on your first scraping script! You're not limited to the options shown here: Python and Selenium are flexible tools with plenty more potential. Here are a couple of tips:

  • If you need to fetch several elements that match the same locator (for example, elements sharing a class), use the following function:
    driver.find_elements(By.<attribute on the page>, "<element>")
  • You can pass several values to the print() function. For example, adding a label before fetch.text makes the output more readable and can also help when debugging the script. Here is an example you can test out:
    print("Your values: ", fetch.text)
  • There are more ways to use Selenium. Check the official documentation for more details: Selenium Documentation 
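For instance, once you've split the scraped text into rows, you could save the result as CSV instead of just printing it. A standalone sketch using only the standard library (the rows below are made-up sample data):

```python
import csv
import io

# Made-up rows, e.g. produced by splitting fetch.text line by line:
rows = [["Lorem", "Ipsum", "Dolor"], ["1", "Apeirian", "Hendrerit"]]

# Write them as CSV; swap io.StringIO for open("table.csv", "w", newline="")
# to save to a file instead:
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```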

Full script

import requests
import hashlib
import time
from selenium import webdriver
from selenium.webdriver.chromium.options import ChromiumOptions
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By

MLX_BASE = "https://api.multilogin.com"
MLX_LAUNCHER = "https://launcher.mlx.yt:45001/api/v1"
MLX_LAUNCHER_V2 = (
    "https://launcher.mlx.yt:45001/api/v2"  # recommended for launching profiles
)
LOCALHOST = "http://127.0.0.1"
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}
# TODO: Insert your account information in both variables below
USERNAME = ""
PASSWORD = ""
# TODO: Insert the Folder ID and the Profile ID below
FOLDER_ID = ""
PROFILE_ID = ""


def signin() -> str:
    payload = {
        "email": USERNAME,
        "password": hashlib.md5(PASSWORD.encode()).hexdigest(),
    }
    r = requests.post(f"{MLX_BASE}/user/signin", json=payload)
    if r.status_code != 200:
        # Stop here: without a token the rest of the script can't work
        raise RuntimeError(f"Error during login: {r.text}")
    return r.json()["data"]["token"]


def start_profile() -> webdriver.Remote:
    r = requests.get(
        f"{MLX_LAUNCHER_V2}/profile/f/{FOLDER_ID}/p/{PROFILE_ID}/start?automation_type=selenium",
        headers=HEADERS,
    )
    if r.status_code != 200:
        raise RuntimeError(f"Error while starting profile: {r.text}")
    print(f"\nProfile {PROFILE_ID} started.\n")
    selenium_port = r.json()["data"]["port"]
    # For Stealthfox profiles use: options=Options()
    # For Mimic profiles use: options=ChromiumOptions()
    driver = webdriver.Remote(
        command_executor=f"{LOCALHOST}:{selenium_port}", options=ChromiumOptions()
    )
    return driver


def stop_profile() -> None:
    r = requests.get(f"{MLX_LAUNCHER}/profile/stop/p/{PROFILE_ID}", headers=HEADERS)
    if r.status_code != 200:
        print(f"\nError while stopping profile: {r.text}\n")
    else:
        print(f"\nProfile {PROFILE_ID} stopped.\n")


token = signin()
HEADERS.update({"Authorization": f"Bearer {token}"})
driver = start_profile()
driver.get("https://the-internet.herokuapp.com/large")
time.sleep(5)
fetch = driver.find_element(By.ID, "large-table")
print(fetch.text)
stop_profile()

This article includes third-party links that we don’t officially endorse.

 

© 2025 Multilogin. All rights reserved.
