From: Thomas 'PointedEars' Lahn on
srikanth wrote:

> I need some help to develop a script which needs to open a browser
> with a specified URL and get the response back.
> Right now I am using xdg-open 'http://www.google.com'. I am using
> xdg-open because the URLs need to open in my default browser.
> Here I want to specify all the URLs in a text file. My script should
> fetch each URL from that input file, open it in a browser, and
> get back the result.
> Ex: If the site loads perfectly then it should say 'OK'. If the site
> shows Error 404 then it should report Error 404, etc.

You do not need a browser, as you only want to make an HTTP request.
libwww-perl contains `HEAD' (an alias for lwp-request(1p)), whose -d option
does something very similar to what you describe.

Web site content can vary based on the request headers; you can use HEAD's
-H option to send specific HTTP headers, such as User-Agent or
Accept-Language. You can even log in to HTTP Auth-protected sites with the
-C option.

$ HEAD -d http://google.com/
200 OK
$ HEAD -d http://groups.google.com/
403 Forbidden
$ HEAD -d http://groups.google.com/ -H 'User-Agent: Firefox/1.0'
200 OK
$ HEAD -d http://groups.google.com/foo -H 'User-Agent: Firefox/1.0'
404 Not Found

Reading the URI from a file and doing that in a loop is left as an exercise
to the reader.
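For what it's worth, that exercise can be sketched in plain sh. Here
`check_urls` and `urls.txt` are names made up for the example, and it
assumes the HEAD alias from libwww-perl is on the PATH:

```shell
# check_urls: read URLs (one per line) from the file named in $1 and
# print the HTTP status line followed by the URL, using HEAD -d.
check_urls() {
    while IFS= read -r url; do
        [ -n "$url" ] || continue    # skip blank lines
        printf '%s %s\n' "$(HEAD -d "$url")" "$url"
    done < "$1"
}
```

Called as `check_urls urls.txt`, this would print one line per URL,
e.g. `200 OK http://google.com/`.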


PointedEars
From: srikanth on
On May 27, 2:41 pm, Thomas 'PointedEars' Lahn <PointedE...(a)web.de>
wrote:
> You do not need a browser, as you only want to make an HTTP request.
> libwww-perl contains `HEAD' (an alias for lwp-request(1p)), whose -d option
> does something very similar to what you describe.
>
> [examples]
>
> Reading the URI from a file and doing that in a loop is left as an exercise
> to the reader.

Hi Thomas,
Thanks for the detailed info, and also for the pointer to libwww-perl.
This is the first time I am seeing these utilities. It would be great if
you could give me any useful info about command-line browsers like the
ones you have mentioned.

Thanks to one and all for the nice suggestions.

Is there any way to get the response by using wget or curl?
From: Kenny McCormack on
In article <abcd6074-e00d-4681-ab02-72017f0b4724(a)h20g2000prn.googlegroups.com>,
srikanth <srikanth007m(a)gmail.com> wrote:
>On May 27, 2:41 pm, Thomas 'PointedEars' Lahn <PointedE...(a)web.de>
>wrote:
>[...]
>
>Is there any way to get the response by using wget or curl?

I just use "lynx -dump" and "lynx -source".

Not sure if this has been mentioned yet.

--
> No, I haven't, that's why I'm asking questions. If you won't help me,
> why don't you just go find your lost manhood elsewhere.

CLC in a nutshell.

From: srikanth on
On May 27, 8:45 pm, gaze...(a)shell.xmission.com (Kenny McCormack)
wrote:
> In article <abcd6074-e00d-4681-ab02-72017f0b4...(a)h20g2000prn.googlegroups.com>,
> srikanth <srikanth0...(a)gmail.com> wrote:
> [...]
>
> I just use "lynx -dump" and "lynx -source".
>
> Not sure if this has been mentioned yet.

Most of these packages are not on my Linux box. Even apt-get and
aptitude are not there to install anything; only the basic utilities are
available. I don't think these packages will be present.

Right now I am using wget to check the HTTP status.

But Thomas gave a nice suggestion to use HEAD to get the HTTP status of
a given URL.
From: Thomas 'PointedEars' Lahn on
srikanth wrote:

> Thomas 'PointedEars' Lahn wrote:
>> You do not need a browser, as you only want to make an HTTP request.
>> libwww-perl contains `HEAD' (an alias for lwp-request(1p)), whose -d
>> option does something very similar to what you describe.
>>
>> Web site content can vary based on the request headers; you can use
>> HEAD's -H option to send specific HTTP headers, such as User-Agent or
>> Accept-Language. You can even log in to HTTP Auth-protected sites
>> with the -C option.
>>
>> [examples]
>>
>> Reading the URI from a file and doing that in a loop is left as an
>> exercise to the reader.
>
> Thanks for the detailed info and also about the libwww-perl.

You're welcome.

> This is the first time I am seeing these utilities. It would be great if
> you could give me any useful info about command-line browsers like the
> ones you have mentioned.

I do not understand your question. I have not referred you to "command line
browsers"; I have said that you do not need *any* Web browser as you only
want to make an HTTP request. All you need is an HTTP client implementation
(which is also part of or used by Web browsers).

In fact, using a plain-text browser like lynx(1), links(1) or w3m(1) might
result in quite different response status codes, as I indicated: apparently
Google's Groups application filters either in favor of the substring
"Firefox" or against the substring "lwp-request", which is, by the way, a
misguided approach:

$ nc -lp 1337 &
[1] 21574
$ HEAD http://localhost:1337
HEAD / HTTP/1.1
TE: deflate,gzip;q=0.3
Connection: TE, close
Host: localhost:1337
User-Agent: lwp-request/5.810

^C
[1]+ Done nc -lp 1337

> Is there any way to get the response by using wget or curl?

Yes.
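With curl, for instance, the -w option can print just the numeric status
code (a sketch; the URL is only an example):

```shell
# -s silences the progress meter, -o /dev/null discards the body, and
# -w '%{http_code}' writes the numeric response code after the transfer.
# "|| true" ignores curl's exit status, since we only want the code;
# curl prints 000 when no HTTP response was received at all.
status=$(curl -s -o /dev/null -w '%{http_code}' http://www.google.com/) || true
echo "$status"
```

wget can do much the same with `wget --spider -S <url>`, which makes the
request without saving the body and prints the response headers,
including the status line.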

Please trim your quotes to the relevant minimum, usually do not quote
signatures.

--
PointedEars