#StackBounty: #python #python-3.x #selenium #web-scraping #python-requests Can't find the right way to grab part numbers from a web…

Bounty: 50

I’m trying to create a script to parse different part numbers from a webpage using requests. If you open this link and click on the Product list tab, you will see the part numbers. This image shows where the part numbers are.

I’ve tried with:

import requests

link = 'https://www.festo.com/cat/en-id_id/products_ADNH'
post_url = 'https://www.festo.com/cfp/camosHTML5Client/cH5C/HRQ'

payload = {"q":4,"ReqID":21,"focus":"f24~v472_0","scroll":[],"events":["e468~12~0~472~0~4","e468_0~6~472"],"ito":22,"kms":4}

with requests.Session() as s:
    s.headers['user-agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'
    s.headers['referer'] = 'https://www.festo.com/cfp/camosHTML5Client/cH5C/go?q=2'
    s.headers['content-type'] = 'application/json; charset=UTF-8'
    r = s.post(post_url,data=payload)
    print(r.json())

When I execute the above script, I get the following result:

{'isRedirect': True, 'url': '../../camosStatic/Exception.html'}

How can I fetch the part numbers from that site using requests?
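One detail worth checking before anything else: `requests` form-encodes a dict passed via `data=`, even when the `content-type` header claims JSON, so the server receives `q=4&ReqID=21&...` instead of a JSON body. Serializing the payload yourself, or passing it via `json=` (which also sets the header for you), avoids that mismatch. A minimal illustration of the difference:

```python
import json
from urllib.parse import urlencode

payload = {"q": 4, "ReqID": 21}

# What requests sends for data=payload (form-encoded):
form_body = urlencode(payload)
# What a JSON endpoint expects (data=json.dumps(payload) or json=payload):
json_body = json.dumps(payload)

print(form_body)   # q=4&ReqID=21
print(json_body)   # {"q": 4, "ReqID": 21}
```

In the script above that would mean `r = s.post(post_url, data=json.dumps(payload))` or simply `r = s.post(post_url, json=payload)`; whether the endpoint then answers usefully also depends on the session cookies and on the `ReqID`/`ito` counters matching a live session, which is outside what this sketch can show.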

With selenium, I tried the following to fetch the part numbers, but the script can’t click on the Product list tab unless I keep a hardcoded delay in it, and I don’t want to rely on any hardcoded delay in the script.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
 
link = 'https://www.festo.com/cat/en-id_id/products_ADNH'
 
with webdriver.Chrome() as driver:
    driver.get(link)
    wait = WebDriverWait(driver,15)
    wait.until(EC.frame_to_be_available_and_switch_to_it(wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "object")))))
    wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "#btn-group-cookie > input[value='Accept all cookies']"))).click()
    driver.switch_to.default_content()
    wait.until(EC.frame_to_be_available_and_switch_to_it(wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "iframe#CamosIFId")))))
    
    time.sleep(10)   #I would like to get rid of this hardcoded delay
    
    item = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "[id='r17'] > [id='f24']")))
    driver.execute_script("arguments[0].click();",item)
    for elem in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "[data-ctcwgtname='tabTable'] [id^='v471_']")))[1:]:
        print(elem.text)
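The fixed `time.sleep(10)` stands in for a condition that should be polled instead: `WebDriverWait` does exactly this, so the usual fix is to find something observable that signals the widget is ready (for instance, waiting for a known element inside the iframe to become clickable, or for a loading overlay to disappear) rather than waiting a fixed time. Which selector to poll is site-specific and a matter of inspection, but the mechanism itself is just a poll loop, sketched here without a browser:

```python
import time

def wait_until(predicate, timeout=15, poll=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout} s")

# A plain counter standing in for a browser readiness check:
state = {"calls": 0}

def ready():
    state["calls"] += 1
    return state["calls"] >= 3

wait_until(ready, timeout=5, poll=0.01)
print(state["calls"])  # 3
```

With selenium, `WebDriverWait(driver, 15).until(...)` plays the role of `wait_until`, and an `expected_conditions` predicate such as `element_to_be_clickable` plays the role of `ready`.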


Get this bounty!!!


#StackBounty: #javascript #python #selenium #google-colaboratory Selenium: How to find element in <script src="xx.js"> …

Bounty: 50

I’d like to execute find_element_by_class_name using selenium in google colaboratory.

The following error was displayed.

NoSuchElementException: Message: no such element: Unable to locate element

After executing print(driver.page_source), I found that this page is generated by JavaScript.

    <body>
        <script src="../public/vendor.bundle.js"></script>
        <script src="../public/bundle.js"></script>
    </body>

The element <div class="Graph"> is visible when I check the Chrome developer tools (F12).

How can I get this class using selenium?

edit

html is as below;

<!doctype html>
<html>
    <head>
        <title>xxx</title>

        <link rel="shortcut icon" href="../public/favicon.ico" />
        <link rel="stylesheet" type="text/css" href="../public/bundle.css">

        <script type="text/javascript">

            var contextPath = "/xxx";

            if(location.pathname.match(/(.*)(\/\S*\/\S*)/)){
                contextPath = RegExp.$1;
            }

            window.raise = window.raise || {};
            window.raise.appInfo = {
                "xxx" :  'xxx',
                "xxx" : xxx,
                .
                .
                .
            };
        </script>
    </head>
    <body>
        <div id="app-progress" class="AppInitializing"></div>
        <div id="root">

            <noscript>
                <p class="xxx">xxx</p>
            </noscript>
        </div>

        <script src="../public/vendor.bundle.js"></script>
        <script src="../public/bundle.js"></script>
    </body>
    
    
</html>

edit

"Elements" tab in Chrome developer tool is as below;

.
.
<div class="main-body-wrapper">
    <div class="main-body-area">
        <div class="page-grid single GraphPage">
            <h3 class="page-title">...</h3>
                <div class="graph-container">
                    <div class="Graph">
                        <div class="loading">...</div>
                        <div>...</div>
.
.

Python code is as below;

.
.
driver.find_element_by_id("openGraphPage").click()
time.sleep(3)
driver.get(r'https://example.com/graph')
time.sleep(10)
print(driver.current_url)
print(driver.page_source)

graph = driver.find_element_by_class_name("Graph")
.
.

Although I tried find_element_by_xpath, the result was the same.
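Since the page is built by JavaScript, the HTML served initially contains only the `<script>` tags; `div.Graph` exists only after `bundle.js` has run, so both a fixed `time.sleep` and an immediate `find_element_by_class_name` race against the rendering. The standard remedy is an explicit wait such as `WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.CLASS_NAME, "Graph")))`. The static-versus-rendered distinction, in miniature:

```python
# Static HTML as served (what an early page_source may show):
initial_html = '<script src="../public/bundle.js"></script>'
# DOM after the bundle has executed (what the Elements tab shows):
rendered_html = '<div class="graph-container"><div class="Graph"></div></div>'

def has_class(html, name):
    # Naive substring check, enough to illustrate the difference
    return f'class="{name}"' in html

print(has_class(initial_html, "Graph"))   # False
print(has_class(rendered_html, "Graph"))  # True
```

`driver.page_source` reflects the live DOM, not the original HTTP response, so once the wait succeeds the element is also visible in `page_source`.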


Get this bounty!!!

#StackBounty: #python #python-3.x #selenium How to find an existing HTML element with python-selenium in a jupyterhub page?

Bounty: 200

I have the following construct in a HTML page and I want to select the li element (with python-selenium):

<li class="p-Menu-item p-mod-disabled" data-type="command" data-command="notebook:run-all-below">
    <div class="p-Menu-itemIcon"></div>
    <div class="p-Menu-itemLabel" style="">Run Selected Cell and All Below</div>
    <div class="p-Menu-itemShortcut" style=""></div>
    <div class="p-Menu-itemSubmenuIcon"></div>
</li>

I am using the following xpath:

//li[@data-command='notebook:run-all-below']

But the element does not seem to be found.

Complete, minimal working example code:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://mybinder.org/v2/gh/jupyterlab/jupyterlab-demo/master?urlpath=lab/tree/demo")

# Wait for the page to be loaded
xpath = "//button[@title='Save the notebook contents and create checkpoint']"
element = WebDriverWait(driver, 600).until(
    EC.presence_of_element_located((By.XPATH, xpath))
)
time.sleep(10)
print("Page loaded")

# Find and click on menu "Run"
xpath_run = "//div[text()='Run']"
element = WebDriverWait(driver, 60).until(
    EC.element_to_be_clickable((By.XPATH, xpath_run))
)
element.click()
print("Clicked on 'Run'")

# Find and click on menu entry "Run Selected Cell and All Below"
xpath_runall = "//li[@data-command='notebook:run-all-below']"
element = WebDriverWait(driver, 600).until(
    EC.element_to_be_clickable((By.XPATH, xpath_runall))
)
print("Found element 'Run Selected Cell and All Below'")
element.click()
print("Clicked on 'Run Selected Cell and All Below'")

driver.close()

Environment:

  • MacOS Mojave (10.14.6)
  • python 3.8.6
  • selenium 3.8.0
  • geckodriver 0.26.0

Addendum

I have been trying to record the steps with the Firefox "Selenium IDE" add-on, which produces the following steps for Python:

driver.get("https://hub.gke2.mybinder.org/user/jupyterlab-jupyterlab-demo-y0bp97e4/lab/tree/demo")
driver.set_window_size(1650, 916)
driver.execute_script("window.scrollTo(0,0)")
driver.find_element(By.CSS_SELECTOR, ".lm-mod-active > .lm-MenuBar-itemLabel").click()

which, of course, also does not work. With those lines of code I get an error:

selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .lm-mod-active > .lm-MenuBar-itemLabel


Get this bounty!!!

#StackBounty: #python #amazon-web-services #selenium #selenium-webdriver Selenium works on AWS EC2 but not on AWS Lambda

Bounty: 100

I’ve looked at and tried nearly every other post on this topic with no luck.

EC2

I’m using python 3.6 so I’m using the following AMI amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (see here). Once I SSH into my EC2, I download Chrome with:

sudo curl https://intoli.com/install-google-chrome.sh | bash
cp -r /opt/google/chrome/ /home/ec2-user/
google-chrome-stable --version
# Google Chrome 86.0.4240.198 

And download and unzip the matching Chromedriver:

sudo wget https://chromedriver.storage.googleapis.com/86.0.4240.22/chromedriver_linux64.zip
sudo unzip chromedriver_linux64.zip

I install python36 and selenium with:

sudo yum install python36 -y
sudo /usr/bin/pip-3.6 install selenium

Then run the script:

import os
import selenium
from selenium import webdriver

CURR_PATH = os.getcwd()
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--window-size=1280x1696')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--hide-scrollbars')
chrome_options.add_argument('--enable-logging')
chrome_options.add_argument('--log-level=0')
chrome_options.add_argument('--v=99')
chrome_options.add_argument('--single-process')
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--remote-debugging-port=9222')
chrome_options.binary_location = f"{CURR_PATH}/chrome/google-chrome"
driver = webdriver.Chrome(
    executable_path = f"{CURR_PATH}/chromedriver",
    chrome_options=chrome_options
)
driver.get("https://www.google.com/")
html = driver.page_source
print(html)

This works

Lambda

I then zip my chromedriver and Chrome files:

mkdir tmp
mv chromedriver tmp
mv chrome tmp
cd tmp
zip -r9 ../chrome.zip chromedriver chrome

And copy the zipped file to an S3 bucket

This is my lambda function:

import os
import boto3
from botocore.exceptions import ClientError
import zipfile
import selenium
from selenium import webdriver

s3 = boto3.resource('s3')

def handler(event, context):
    chrome_bucket = os.environ.get('CHROME_S3_BUCKET')
    chrome_key = os.environ.get('CHROME_S3_KEY')
    # DOWNLOAD HEADLESS CHROME FROM S3
    try:    
        # with open('/tmp/headless_chrome.zip', 'wb') as data:
        s3.meta.client.download_file(chrome_bucket, chrome_key, '/tmp/chrome.zip')
        print(os.listdir('/tmp'))
    except ClientError as e:
        raise e
    # UNZIP HEADLESS CHROME
    try:
        with zipfile.ZipFile('/tmp/chrome.zip', 'r') as zip_ref:
            zip_ref.extractall('/tmp')
        # FREE UP SPACE
        os.remove('/tmp/chrome.zip')
        print(os.listdir('/tmp'))
    except:
        raise ValueError('Problem with unzipping Chrome executable')
    # CHANGE PERMISSION OF CHROME
    try:
        os.chmod('/tmp/chromedriver', 0o775)
        os.chmod('/tmp/chrome/chrome', 0o775)
        os.chmod('/tmp/chrome/google-chrome', 0o775)
    except:
        raise ValueError('Problem with changing permissions to Chrome executable')
    # GET LINKS
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--headless')
    chrome_options.add_argument('--window-size=1280x1696')
    chrome_options.add_argument('--disable-gpu')
    chrome_options.add_argument('--disable-dev-shm-usage')
    chrome_options.add_argument('--hide-scrollbars')
    chrome_options.add_argument('--enable-logging')
    chrome_options.add_argument('--log-level=0')
    chrome_options.add_argument('--v=99')
    chrome_options.add_argument('--single-process')
    chrome_options.add_argument('--ignore-certificate-errors')
    chrome_options.add_argument('--remote-debugging-port=9222')
    chrome_options.binary_location = "/tmp/chrome/google-chrome"
    driver = webdriver.Chrome(
        executable_path = "/tmp/chromedriver",
        chrome_options=chrome_options
    )
    driver.get("https://www.google.com/")
    html = driver.page_source
    print(html)

I’m able to see my unzipped files in the /tmp path.

And my error:

{
  "errorMessage": "Message: unknown error: unable to discover open pagesn",
  "errorType": "WebDriverException",
  "stackTrace": [
    [
      "/var/task/lib/observer.py",
      69,
      "handler",
      "chrome_options=chrome_options"
    ],
    [
      "/var/task/selenium/webdriver/chrome/webdriver.py",
      81,
      "__init__",
      "desired_capabilities=desired_capabilities)"
    ],
    [
      "/var/task/selenium/webdriver/remote/webdriver.py",
      157,
      "__init__",
      "self.start_session(capabilities, browser_profile)"
    ],
    [
      "/var/task/selenium/webdriver/remote/webdriver.py",
      252,
      "start_session",
      "response = self.execute(Command.NEW_SESSION, parameters)"
    ],
    [
      "/var/task/selenium/webdriver/remote/webdriver.py",
      321,
      "execute",
      "self.error_handler.check_response(response)"
    ],
    [
      "/var/task/selenium/webdriver/remote/errorhandler.py",
      242,
      "check_response",
      "raise exception_class(message, screen, stacktrace)"
    ]
  ]
}

EDIT: I am willing to try out anything at this point. Different versions of Chrome or Chromium, Chromedriver, Python or Selenium.
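For what it’s worth, one frequent difference between EC2 and Lambda is that `/tmp` is the only writable path in a Lambda container, and headless Chrome tries to write its profile, cache, and crash dumps to locations that do not exist there. Setups derived from the serverless-chrome/intoli builds therefore usually add flags pointing every writable directory at `/tmp`; the exact set below is an assumption to verify against your Chrome build:

```python
# Extra arguments commonly added when running headless Chrome on Lambda
# (the directory names are arbitrary; they just need to live under /tmp):
lambda_tmp_flags = [
    "--user-data-dir=/tmp/user-data",
    "--data-path=/tmp/data-path",
    "--homedir=/tmp",
    "--disk-cache-dir=/tmp/cache-dir",
]

# They would be appended to the existing ChromeOptions arguments, e.g.:
existing_args = ["--no-sandbox", "--headless", "--single-process"]
all_args = existing_args + lambda_tmp_flags
print(len(all_args))  # 7
```

In the handler above this would mean one extra `chrome_options.add_argument(flag)` per entry before constructing the driver.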


Get this bounty!!!

#StackBounty: #command-line #python #google-chrome #cron #selenium Cannot create a crontab job for my scrapy program

Bounty: 100

I have written a small Python scraper (using the Scrapy framework). The scraper requires a headless browser… I am using ChromeDriver.

As I am running this code on an Ubuntu server which does not have any GUI, I had to install Xvfb in order to run ChromeDriver on my Ubuntu server (I followed this guide)

This is my code:

class MySpider(scrapy.Spider):
    name = 'my_spider'

    def __init__(self):
        # self.driver = webdriver.Chrome(ChromeDriverManager().install())
        chrome_options = Options()
        chrome_options.add_argument('--headless')
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        self.driver = webdriver.Chrome('/usr/bin/chromedriver', chrome_options=chrome_options)

I can run the above code from the Ubuntu shell and it executes without any errors:

ubuntu@ip-1-2-3-4:~/scrapers/my_scraper$ scrapy crawl my_spider

Now I want to set up a cron job to run the above command every day:

# m h  dom mon dow   command
PATH=/usr/local/bin:/home/ubuntu/.local/bin/
05 12 * * * cd /home/ubuntu/scrapers/my_scraper && scrapy crawl my_spider >> /tmp/scraper.log 2>&1

but the crontab job gives me the following error:

Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 192, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 196, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
    _inlineCallbacks(None, g, status)
--- <exception caught here> ---
  File "/home/ubuntu/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 86, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 98, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/spiders/__init__.py", line 19, in from_crawler
    spider = cls(*args, **kwargs)
  File "/home/ubuntu/scrapers/my_scraper/my_scraper/spiders/spider.py", line 27, in __init__
    self.driver = webdriver.Chrome('/usr/bin/chromedriver', chrome_options=chrome_options)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
    desired_capabilities=desired_capabilities)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
    self.start_session(capabilities, browser_profile)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
  (unknown error: DevToolsActivePort file doesn't exist)
  (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
  (Driver info: chromedriver=2.41.578700 (2f1ed5f9343c13f73144538f15c00b370eda6706),platform=Linux 5.4.0-1029-aws x86_64)

Update

This answer helped me solve the issue (but I don’t quite understand why).

I ran echo $PATH on my Ubuntu shell and copied the value into the crontab:

PATH=/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
05 12 * * * cd /home/ubuntu/scrapers/my_scraper && scrapy crawl my_spider >> /tmp/scraper.log 2>&1

Note: As I have created a bounty for this question, I am happy to award it to any answer that explains why changing the PATH solved the issue.
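As to the why: cron runs jobs with a minimal environment, typically `PATH=/usr/bin:/bin` plus whatever the crontab sets, while an interactive shell has the full PATH shown by `echo $PATH`. `scrapy` itself was reachable through the PATH line in the crontab, but Chrome and its helpers resolve further tools through PATH as well, and anything installed under directories like `/home/ubuntu/.local/bin` or `/snap/bin` is invisible to cron's default lookup; once something Chrome needs cannot be located, it exits and chromedriver reports the `DevToolsActivePort` error. (This is a plausible chain rather than a verified trace.) The lookup that PATH controls can be seen directly:

```python
import shutil

# cron's default PATH on many distributions:
cron_path = "/usr/bin:/bin"
# a typical interactive PATH (abridged):
shell_path = "/home/ubuntu/.local/bin:/usr/local/bin:/usr/bin:/bin:/snap/bin"

def resolve(prog, path):
    """Where a program launched from this environment would be found."""
    return shutil.which(prog, path=path)

# 'sh' is found either way; a tool installed under ~/.local/bin or
# /snap/bin would only resolve with the interactive PATH.
print(resolve("sh", cron_path) is not None)  # True
```

Copying the interactive `echo $PATH` value into the crontab, as done above, makes both environments resolve programs identically.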


Get this bounty!!!

#StackBounty: #python #selenium #web-scraping #scrapy #web-crawler how to run spider multiple times with different input

Bounty: 50

I’m trying to scrape information from different sites about some products. Here is the structure of my program:

product_list = ['iPad', 'iPhone', 'AirPods', ...]

def spider_tmall(self):
    self.driver.find_element_by_id('searchKeywords').send_keys(product_list[a])

# ...


def spider_jd(self):
    self.driver.find_element_by_id('searchKeywords').send_keys(product_list[a])

# ...

if __name__ == '__main__':

    for a in range(len(product_list)):
        process = CrawlerProcess(settings={
            "FEEDS": {
                "itemtmall.csv": {"format": "csv",
                                  'fields': ['product_name_tmall', 'product_price_tmall', 'product_discount_tmall']},
                "itemjd.csv": {"format": "csv",
                               'fields': ['product_name_jd', 'product_price_jd', 'product_discount_jd']},
            },
        })

        process.crawl(tmallSpider)
        process.crawl(jdSpider)
        process.start()

Basically, I want to run all spiders for all inputs in product_list. Right now, my program only runs through all spiders once (in this case, for iPad); then a ReactorNotRestartable error is raised and the program terminates. Does anybody know how to fix it?
Also, my overall goal is to run the spiders multiple times, and the input doesn’t necessarily have to be a list: it can be a CSV file or something else. Any suggestion would be appreciated!
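Twisted's reactor, which `process.start()` spins up, can only be started once per Python process, and that is exactly what `ReactorNotRestartable` signals. The usual fix is to create one `CrawlerProcess`, queue a `crawl()` for every spider/keyword combination (passing the keyword as a spider argument instead of reading a shared index), and call `start()` once after the loop. A minimal model of that restructuring, with a stand-in class instead of Scrapy so the shape is clear:

```python
# The fix, in miniature: queue every job first, then start the loop once.
class OneShotReactor:
    """Stand-in for Twisted's reactor: it can only be started once."""
    def __init__(self):
        self.jobs = []
        self.started = False

    def crawl(self, spider, keyword):
        # Queuing is cheap and repeatable...
        self.jobs.append((spider, keyword))

    def start(self):
        # ...starting is not.
        if self.started:
            raise RuntimeError("ReactorNotRestartable")
        self.started = True
        return [f"{s}:{k}" for s, k in self.jobs]

process = OneShotReactor()
for keyword in ["iPad", "iPhone", "AirPods"]:
    process.crawl("tmall", keyword)
    process.crawl("jd", keyword)
results = process.start()   # one start() after all crawls are queued
print(len(results))  # 6
```

With Scrapy proper this becomes `process.crawl(tmallSpider, keyword=kw)` inside the loop and a single `process.start()` after it, with the spider accepting `keyword` in its `__init__`; reading the keywords from a CSV file instead of a list changes only the loop, not the structure.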


Get this bounty!!!

#StackBounty: #c# #selenium #selenium-webdriver #deprecation-warning How to fix 'IHasInputDevices is obsolete. Use the Actions or A…

Bounty: 50

After upgrading the Selenium NuGet packages in our solution to version 3.141.0 (both Selenium.WebDriver and Selenium.Support), the IHasInputDevices interface now has a warning:

‘IHasInputDevices’ is obsolete. ‘Use the Actions or ActionsBuilder class to simulate mouse and keyboard input.’

Screenshot of deprecation warning in Visual Studio

I created a utility class called LazyWebDriver, which implements the IWebDriver, IHasInputDevices and IActionExecutor interfaces. The LazyWebDriver class delays the instantiation of ChromeDriver until a member of IWebDriver gets accessed. This allows us to pass an IWebDriver object around and delay the appearance of the browser window, in case a test fails during the setup phase.

Code for the LazyWebDriver class:

public class LazyWebDriver : IWebDriver/*, IHasInputDevices*/, IActionExecutor
{
    private System.Func<IWebDriver> createDriver;
    private IWebDriver driver;

    private IWebDriver Driver
    {
        get
        {
            if (driver == null)
                driver = createDriver();

            return driver;
        }
    }

    public string Url
    {
        get => Driver.Url;
        set => Driver.Url = value;
    }

    public string Title => Driver.Title;

    public string PageSource => Driver.PageSource;

    public string CurrentWindowHandle => Driver.CurrentWindowHandle;

    public ReadOnlyCollection<string> WindowHandles => Driver.WindowHandles;

    public IKeyboard Keyboard => ((IHasInputDevices)Driver).Keyboard;

    public IMouse Mouse => ((IHasInputDevices)Driver).Mouse;

    public bool IsActionExecutor => ((IActionExecutor)Driver).IsActionExecutor;

    public LazyWebDriver(System.Func<IWebDriver> createDriver)
    {
        this.createDriver = createDriver;
    }

    public void Close()
    {
        Driver.Close();
    }

    public void Dispose()
    {
        Driver.Dispose();
    }

    public IWebElement FindElement(By by)
    {
        return Driver.FindElement(by);
    }

    public ReadOnlyCollection<IWebElement> FindElements(By by)
    {
        return Driver.FindElements(by);
    }

    public IOptions Manage()
    {
        return Driver.Manage();
    }

    public INavigation Navigate()
    {
        return Driver.Navigate();
    }

    public void Quit()
    {
        Driver.Quit();
    }

    public ITargetLocator SwitchTo()
    {
        return Driver.SwitchTo();
    }

    public void PerformActions(IList<ActionSequence> actionSequenceList)
    {
        ((IActionExecutor)Driver).PerformActions(actionSequenceList);
    }

    public void ResetInputState()
    {
        ((IActionExecutor)Driver).ResetInputState();
    }
}

The warning indicates that the Actions or ActionBuilder class should be used, so I removed the IHasInputDevices interface from the LazyWebDriver class and attempted to use the Actions class:

[TestClass]
public class DeprecatedInterfaceTest
{
    [TestMethod]
    public void Test()
    {
        using (var driver = new LazyWebDriver(() => new ChromeDriver()))
        {
            driver.Navigate().GoToUrl("https://www.stackoverflow.com");

            var link = driver.FindElement(By.CssSelector("a[href='/teams/customers']"));
            var actions = new Actions(driver);

            actions = actions.MoveToElement(link);
            actions = actions.Click(link);
            actions.Perform();
        }
    }
}

The test failed with the following error message:

Test method DeprecatedInterfaceTest.Test threw exception:
System.ArgumentException: The IWebDriver object must implement or wrap a driver that implements IHasInputDevices.
Parameter name: driver

The test fails at this line:

var actions = new Actions(driver);

I did some searching online, and I didn’t find a way to eliminate the IHasInputDevices interface and use the Actions class as indicated in the obsolete warning. It also appears the ActionBuilder class is used to queue up a bunch of Actions objects.

How can I eliminate the IHasInputDevices interface and still use the Actions class?


Get this bounty!!!

#StackBounty: #google-chrome #proxy #selenium How to setup user and password for proxy server in Chrome/Chromium via argument?

Bounty: 100

I’m using Selenium to automate some tasks. I need to use a proxy, but my proxy uses Basic access authentication, so I need to supply a user and password. I’m already using the --proxy-server=http://127.0.0.1:8001 argument to set up the proxy server. Doing some research I found the --proxy-user-and-password argument, but it seems to be invalid. Is it possible to configure this via an argument?


Get this bounty!!!

#StackBounty: #java #selenium #browsermob-proxy BrowserMobProxy with Selenium Java

Bounty: 50

I’m trying to save HAR content to my local drive using a Selenium Java UI script. When I try to do that, I receive a set of errors. I went through several blogs and updated the Guava jar file to the latest version, but still no success.

Code Snippet:

public static void main(String[] args) {

     System.out.println("Hi");

     ChromeOptions options = new ChromeOptions();
        BrowserMobProxy proxy = new BrowserMobProxyServer();
     proxy.start(8080); 


     Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);



     DesiredCapabilities capabilities = new DesiredCapabilities();
     capabilities.setCapability(CapabilityType.PROXY, seleniumProxy);
     capabilities.setCapability(ChromeOptions.CAPABILITY, options);

     System.setProperty("webdriver.chrome.driver", "D:\\Selenium Files\\chromedriver_win32 (3)\\chromedriver.exe");
    @SuppressWarnings("deprecation")
    WebDriver driver =  new ChromeDriver(capabilities);


    driver.manage().window().maximize();
//  
    proxy.newHar("test");
    driver.get("https://google.com");

    proxy.stop();

}

Error:

07:43:43.059 [LittleProxy-0-ClientToProxyWorker-4] ERROR org.littleshoot.proxy.impl.ClientToProxyConnection - (AWAITING_INITIAL) [id: 0xca565cec, L:/192.168.1.4:8080 - R:/192.168.1.4:62453]: Caught an exception on ClientToProxyConnection
java.lang.NoSuchMethodError: com.google.common.net.HostAndPort.getHostText()Ljava/lang/String;
    at org.littleshoot.proxy.impl.ProxyToServerConnection.addressFor(ProxyToServerConnection.java:954) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyToServerConnection.setupConnectionParameters(ProxyToServerConnection.java:832) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyToServerConnection.<init>(ProxyToServerConnection.java:199) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyToServerConnection.create(ProxyToServerConnection.java:173) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ClientToProxyConnection.doReadHTTPInitial(ClientToProxyConnection.java:284) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ClientToProxyConnection.readHTTPInitial(ClientToProxyConnection.java:191) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ClientToProxyConnection.readHTTPInitial(ClientToProxyConnection.java:80) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyConnection.readHTTP(ProxyConnection.java:135) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyConnection.read(ProxyConnection.java:120) ~[browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyConnection.channelRead0(ProxyConnection.java:587) ~[browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [browsermob-dist-2.1.4.jar:?]
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyConnection$RequestReadMonitor.channelRead(ProxyConnection.java:715) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [browsermob-dist-2.1.4.jar:?]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293) [browsermob-dist-2.1.4.jar:?]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [browsermob-dist-2.1.4.jar:?]
    at org.littleshoot.proxy.impl.ProxyConnection$BytesReadMonitor.channelRead(ProxyConnection.java:692) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488) [browsermob-dist-2.1.4.jar:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450) [browsermob-dist-2.1.4.jar:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873) [browsermob-dist-2.1.4.jar:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]

I have configured the following jar files in my project.

(screenshots of the project's configured jar files)


Get this bounty!!!