From: script||die on
On 01/18/2010 04:21 PM, Stephen Horne wrote:
>
> Is there a tool that can create a software repository on a USB flash
> drive, including all software currently installed on one machine, so
> that I can use that repository to install the same software on a
> non-networked machine?
>
> I have my desktop PC running Linux well enough, but I also have a
> laptop which hasn't been connected to any network for years. Running
> Windows, this was a policy thing - the machine slows down too much
> when running firewall, antivirus etc.
>
> On Linux, performance is fine, but I have completely failed to get the
> laptop to connect via my ethernet cable modem. This also fails in
> Windows - something to do with failing to acquire an IP address. I
> have also failed to get a WIFI network connection running between the
> laptop and desktop PCs.
>
> I have my doubts about whether the ethernet port is faulty, though I
> don't think I ever got that PCMCIA WIFI card to work.
>
> Anyway, this is 90% fine. The laptop has been doing non-internet
> duties for several years, and that's all I want it to do. The only
> trouble is getting software installed that isn't included on the
> OpenSUSE install DVD.
>
> With all the repositories currently available online, I'm guessing
> there are tools that can manage these repositories, that may be able
> to do what I want?
>

There's a lot of stuff in there.

First, if you're interested in info: I got an SMC /n wifi router, and
though I have never been able to get their matching USB wifi adapter to
work, their paired PCMCIA wifi card (SMCWCB-N) works out of the box under
SUSE since 10.3, I think. Just set it up with YaST - no NetworkManager,
no ndiswrapper, nothing.

As for the repos, I and maybe thousands of others have the same problem.

Getting the whole distribution file tree is the hard part, keeping the
update tree updated is harder, and loading only what's in use or needed
is hardest. All this could be so much simpler if you could just tell
YaST: here's where I want you to save all downloaded files, and here's
where I want you to read them from (always in terms of parent folders,
dynamic indexing in RAM only, and selectable deltas/RPMs or whatever). Then
you could take that portable drive or USB stick to another machine and bingo.

For now

Before beginning the move, YaST will export the packages list to wherever
you want (it should always be this way). You can later import this same
list as the basis for completing an initially minimal install after the
first boot on another machine (or a new install on another partition).
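
(If you prefer the command line, a rough approximation of that package
list - not the YaST export format itself, and the filename is only an
example - can be grabbed like this:

rpm -qa --queryformat '%{NAME}\n' | sort > installed-packages.txt

and later fed back on the target machine with something like
zypper in $(cat installed-packages.txt) once the repos are in place.)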

/etc/zypp/repos.d is the folder I copy into the new system on the other
machine. This folder is where YaST keeps the repo definitions, including
the local RPM repo paths, for itself (it takes a lot of time to configure them).
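
Roughly, that copy is nothing more than this (the USB mountpoint
/media/usb is only an example):

# on the configured machine: stash the repo definitions on the transport drive
cp -a /etc/zypp/repos.d /media/usb/repos.d-backup
# on the new machine: drop them into place
cp -a /media/usb/repos.d-backup/*.repo /etc/zypp/repos.d/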

Copy the populated local repo RPM tree to the USB drive. Then mount it
under a mountpoint that makes it look identical to the paths in
/etc/zypp/repos.d. I use an old hard drive with a USB adapter to truck the
stuff around. It has an sa15 folder at its root and comp/fix/suse112 under
that (along with suse103 and suse111). On any machine I mount that drive
under mountpoint /0, so the suse112 folder with the repo folders always
looks like /0/sa15/comp/fix/suse112 to YaST once mounted. Now when you
launch YaST on the new system it reads in from that tree and recreates
the index list (I guess) for itself.
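
In practice that's just the following (the device name /dev/sdb1 is
whatever the drive happens to show up as on that machine):

mkdir -p /0
mount /dev/sdb1 /0
# the repo paths in /etc/zypp/repos.d now resolve,
# e.g. /0/sa15/comp/fix/suse112/suse-up-i586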

For updating the local repo tree I script rsync and run it every now and
then. Here's a short piece to show the idea (I'm NO rsync driver).
The lines will surely get wrapped in this post, so you may need to
straighten them out. There are about a dozen such sections, one for each
major repo folder; the Packman repos have very long exclude lists of huge
files I don't want. I wish everyone would use separate folders for games,
Java, internationalization etc.

#!/bin/bash
# Mirror the 11.2 i586 update repo into the local tree, skipping the big stuff.
cd /0/sa15/comp/fix/suse112
echo "START = ftp5-gwdg-de/pub/opensuse/update/11.2/rpm/i586/ -> /0/sa15/comp/fix/suse112/suse-up-i586"
rsync -vidhut --bwlimit=10 --progress \
  --exclude '*debug*' \
  --exclude '*delta*' \
  --exclude '*patch*' \
  --exclude '*-info*' \
  --exclude 'INDEX*' \
  --exclude '*acroread*' \
  --exclude '*xen*' \
  --exclude 'eclipse*' \
  --exclude '*-i18n*' \
  --exclude '*-html*' \
  --exclude '*ava*' \
  --exclude 'OpenOffice*' \
  --exclude '*devel-doc*' \
  --delete-excluded \
  --delete-after \
  ftp5.gwdg.de::pub/opensuse/update/11.2/rpm/i586/ \
  /0/sa15/comp/fix/suse112/suse-up-i586
echo "####################################################################################"

exit




From: Eef Hartman on
Mark S Bilk <mark(a)cosmicpenguin.com> wrote:
> http://download.opensuse.org/repositories/KDE:/KDE4:/Playground/openSUSE_11.2/
>
> using wget -m -np http:...
>
> but it turned out to be full of links, and actually downloaded
> the files into five subdirectories, each with the files from a
> different server:

Yeah, that's how download.opensuse.org works: it distributes all requests
over a set of mirrors, dynamically.
What you SHOULD do is use the rsync server; see:
http://en.opensuse.org/Mirror_Infrastructure

In short, you can get any subtree of one of the standard rsync modules
by appending the path to the rsync module.
Your example above would be something like:
rsync -rlpt rsync.opensuse.org::buildservice-repos/KDE:/KDE4:/Playground/openSUSE_11.2/ <destination_dir>
(and you can test it by leaving out the destination dir; then you just get
a list of files "to be transferred").

PS: without the trailing / the transferred tree starts with the
openSUSE_11.2 directory itself; WITH it, only the contents of that
directory are transferred.
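
For example (the destination /srv/mirror is only a placeholder):

# without trailing slash: ends up as /srv/mirror/openSUSE_11.2/...
rsync -rlpt rsync.opensuse.org::buildservice-repos/KDE:/KDE4:/Playground/openSUSE_11.2 /srv/mirror/
# with trailing slash: the contents of openSUSE_11.2 land directly under /srv/mirror/
rsync -rlpt rsync.opensuse.org::buildservice-repos/KDE:/KDE4:/Playground/openSUSE_11.2/ /srv/mirror/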

With other rsync modules you can also get the distribution repos
(note: these are a different kind, the YaST type of repo, which you cannot
create/adjust with createrepo; that utility makes a yum, i.e. repo-md, type
of repo) or the updates.
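(For the repo-md kind that createrepo does make, the usage is simply,
with the directory only as an example:

createrepo /srv/local-rpms

which writes a repodata/ subdirectory with the metadata next to the RPMs.)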
--
*******************************************************************
** Eef Hartman, Delft University of Technology, dept. SSC/ICT **
** e-mail: E.J.M.Hartman(a)tudelft.nl - phone: +31-15-278 82525 **
*******************************************************************
From: David Bolt on
On Tuesday 19 Jan 2010 06:03, while playing with a tin of spray paint,
script||die painted this mural:


> Getting the whole distribution file tree is the hard part,

Not really. I maintain a local web server specifically for network
installs[0] of the various versions, all of which were retrieved using
rsync to create the local mirror.

> keeping the
> update tree updated is harder,

A daily/weekly cron job to mirror one of the update mirrors works
perfectly for me, and has done for several years now. I think I first
started doing this with SuSE 8.0 or 8.1, not sure which one. And, once
I set it up, I've had it mirroring all the versions since 7.0.
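
As a sketch, such a cron entry looks something along these lines (the
script path and times here are only examples, not the actual script
mentioned in [1] below):

# /etc/cron.d/suse-mirror - refresh the local update mirror every night
30 3 * * *  root  /usr/local/bin/get-updates >>/var/log/suse-mirror.log 2>&1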

> and loading only what's in use or needed
> is hardest.

That is a much harder task. You'd need to have a complete list of all
the packages installed on each machine, including any possible
architecture or release differences. Maintaining them would be a bit of
a pain so it'd probably be easier to just mirror the entire selection
of packages.

> All this could be so much simpler if you could just tell
> Yast here's where I want you to save all downloaded files, and here's
> where I want you to read them from (always in terms of parent folders,
> dynamic indexing in ram only, and selectable deltas/rpms whatever). Then
> you could take that portable drive or usb to another machine and bingo.

A USB drive would be a better option if mirroring the updates is
included as there's quite often package and/or metadata changes, and
it's a good idea to minimise writes to USB memory sticks. Also, while a
USB key may (possibly) be a little faster, they aren't up to the same
capacity as a hard drive and are also a lot more expensive.

> Before beginning the move Yast will export to wherever you want
> (should always be this way) the packages list. You can later import this
> same list as the basis for completing an initially minimal install after
> the first boot on another machine (or a new install on another partition).
>
> /etc/zypp/repos.d is the folder I copy into the new system on the other
> machine. This folder is where YaST keeps the repo definitions, including
> the local RPM repo paths, for itself (it takes a lot of time to configure them).

Why not export them using:

zypper lr -e repos

copy the repos.repo file to the other machine and then import the repos
using:

zypper ar -r repos
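
Spelled out end to end, and with the hostname and filename only as
placeholders, that would be something like:

# on the already-configured machine
zypper lr -e repos.repo
# move the file across, via scp, the USB drive, or whatever
scp repos.repo otherhost:/tmp/
# on the other machine
zypper ar -r /tmp/repos.repo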

<snip>

> rsync -vidhut --bwlimit=10 --progress \
> --exclude *debug* \
> --exclude *delta* \
> --exclude *patch* \
> --exclude *-info* \
> --exclude INDEX* \
> --exclude *acroread* \
> --exclude *xen* \
> --exclude eclipse* \
> --exclude *-i18n* \
> --exclude *-html* \
> --exclude *ava* \
> --exclude OpenOffice* \
> --exclude *devel-doc* \
> --delete-excluded \
> --delete-after \
> ftp5.gwdg.de::pub/opensuse/update/11.2/rpm/i586/
> /0/sa15/comp/fix/suse112/suse-up-i586

You're skipping several packages there, which could cause issues when
doing an update, although I can understand the reason for doing so.
Purely out of curiosity, is there a reason for skipping the delta and
patch packages?

Anyway, I have a daily cron job that mirrors the entire trees for the
various versions I'm using. The command used to do the mirroring in
that script[1] is:

rsync -avP \
--exclude rpm/ppc \
--exclude rpm/ppc64 \
--exclude deltas/*.ppc.delta.rpm \
--exclude deltas/*.ppc64.delta.rpm \
--safe-links \
--delete-after \
--delete-excluded \
--timeout=1800 \
"${server}/${version}" \
"${dest}/"

$server is[2]:

rsync://ftp-1.gwdg.de/pub/opensuse/update/

$dest and $version are the destination root directory, and the version
being mirrored. For my own local 11.2 update mirror the full path is on
an NFS exported directory which all my machines would see as:

/mounts/playing/share/suse/i386/update/11.2/
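
Pieced together with concrete values (my reading of the paths quoted
above, ignoring the MySQL lookup described in [1] and leaving out the
excludes), that call works out to roughly:

server="rsync://ftp-1.gwdg.de/pub/opensuse/update"
version="11.2"
dest="/mounts/playing/share/suse/i386/update"
rsync -avP --safe-links --delete-after --delete-excluded --timeout=1800 \
    "${server}/${version}" "${dest}/"
# no trailing slash on the source, so the mirror ends up under ${dest}/11.2/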


[0] My network is configured with a PXE boot server set up to allow
installation of all the current releases, along with a few other
network-based tools; a network-bootable GParted and memtest are the
main ones.

[1] My script actually reads the active mirrors, server path and server
options from a MySQL database. This makes it a lot easier to maintain
as all I need to do is set/reset a flag and a particular version is
enabled or disabled. This means there's no need for me to edit the
script to remove old versions, although new versions do require a new
entry in the table. For the curious, a copy of my script is here:

http://www.davjam.org/~davjam/linux/scripts/get-updates

[2] The reason for $server being in the database, rather than being part
of the script, is that originally the server paths were versioned.
Having the server path stored in the database was actually quite useful
because, when openSUSE 10.3 was released, the path was changed from the
previously used path: rsync://ftp-1.gwdg.de/pub/linux/suse/suse_update/

Regards,
David Bolt

--
Team Acorn: www.distributed.net OGR-NG @ ~100Mnodes RC5-72 @ ~1Mkeys/s
openSUSE 11.0 32b | | | openSUSE 11.3M0 32b
openSUSE 11.0 64b | openSUSE 11.1 64b | openSUSE 11.2 64b |
TOS 4.02 | openSUSE 11.1 PPC | RISC OS 4.02 | RISC OS 3.11
From: script||die on
On 01/19/2010 10:57 AM, houghi wrote:
> David Bolt wrote:
>> That is a much harder task. You'd need to have a complete list of all
>> the packages installed on each machine, including any possible
>> architecture or release differences. Maintaining them would be a bit of
>> a pain so it'd probably be easier to just mirror the entire selection
>> of packages.
>
> It depends on the number of machines. I just connect them to the
> standard connection over the interweb and have everything I need.
> Instead of downloading a LOT that I don't need, I will download some
> things twice. I would think that I need a lot of machines (20+ at least)
> per version and architecture to make it a bit interesting.
>
> So for the majority of people just using the interweb will be the
> easiest solution:
> * Less download
> * Less configuration
> * More standard
> * Less that can go wrong
>
> Obviously there will be people who (think they) need all the stuff local.

Both of you seem to have something I don't: a level of expertise. I find
myself in the situation of a relative dimwit who has been volunteered to
be 'tha man' maintaining half a dozen systems for friends, family and one
benevolent community's machines, all for free! If I let slip the term
'rural' that should be inspiring enough. I don't even have time to spill
coffee on the keyboard, much less get deep into networking. I should have
stuck with freakin' Windows, then NOBODY would be asking anything of me.
Really, I'm looking for a way to let everyone just fend for themselves :-(

About the local repos: some computers have no networking at all, some
are only connectable under stringent house policies. With my 'flock'
spread out over an area, the only really practical solution has been, and
will remain, portable-HD and lately USB repo transport. I'll agree that
the portable HD still, and by far, surpasses the USB fad for any number of
reasons, but a stick is so easy to just plug in. The one machine I'm really
proud of is a 10.3 installed in 2003 that I haven't touched since and
which is still doing what it was intended to, but it's an exception.

A lot of otherwise commendable effort has gone into improving online
installs and updating over time, because it's easy and popular, but maybe
the offline side has been neglected somewhat.

I'll investigate all the suggestions, first chance.

> For older versions. Just download all the moment you see it will be
> going out of service. That way you only need to download it once.
>
>> A USB drive would be a better option if mirroring the updates is
>> included as there's quite often package and/or metadata changes, and
>> it's a good idea to minimise writes to USB memory sticks. Also, while a
>> USB key may (possibly) be a little faster, they aren't up to the same
>> capacity as a hard drive and are also a lot more expensive.
>
> OTOH a stick is easy to put in your pocket and several GB is not really
> an issue anymore. Also I see USB stick more as USB device. This could be
> almost anything. For me it is basically (micro)SD cards I placed in an
> adapter or my Tomtom or even camera.
>
>> Why not export them using:
>>
>> zypper lr -e repos
>>
>> copy the repos.repo file to the other machine and then import the repos
>> using:
>>
>> zypper ar -r repos
>
> Stupid zypper and all its good stuff. ;-)
> http://en.opensuse.org/Zypper/Usage/11.2 There are even cheatsheets over
> yonder.
>
>> rsync -avP \
>> --exclude rpm/ppc \
>> --exclude rpm/ppc64 \
>> --exclude deltas/*.ppc.delta.rpm \
>> --exclude deltas/*.ppc64.delta.rpm \
>> --safe-links \
>> --delete-after \
>> --delete-excluded \
>> --timeout=1800 \
>> "${server}/${version}" \
>> "${dest}/"

> Didn't you used to have a Mac?

Naaw, I was delivered in an amiga box and then went compuke-awol for years.

>
> houghi

From: David Bolt on
On Tuesday 19 Jan 2010 15:57, while playing with a tin of spray paint,
houghi painted this mural:

> David Bolt wrote:
>> That is a much harder task. You'd need to have a complete list of all
>> the packages installed on each machine, including any possible
>> architecture or release differences. Maintaining them would be a bit of
>> a pain so it'd probably be easier to just mirror the entire selection
>> of packages.
>
> It depends on the number of machines. I just connect them to the
> standard connection over the interweb and have everything I need.

I used to do that as well, many releases ago.

> Instead of downloading a LOT that I don't need, I will download some
> things twice. I would think that I need a lot of machines (20+ at least)
> per version and architecture to make it a bit interesting.

My mirrors are:

17892548 /local/openSUSE-11.0-GM
14038316 /local/openSUSE-11.1-GM
14161708 /local/openSUSE-11.2-GM
12931220 /local/openSUSE-11.3-GM
8756416 /local/openSUSE-11.1-ppc

so it only needs a few machines running the same version before
maintaining a mirror locally uses less bandwidth than retrieving
virtually the same packages for each machine. My estimate was just three
or four 4GB installs, and having a local mirror saves me from using my
ADSL connection. The fun of creating and testing various configurations
using virtual machines[0] is another consideration. Each one of those
needs the packages fetched again, and I usually end up creating and
overwriting half a dozen with each new release.

In the above, the PPC mirror was a waste of time to download since it's
only used for a single system.

Also, just in case you wonder why the 11.0 mirror is bigger than the
others: up to that release the PPC packages were contained within the
main repo. For 11.1 they went into their own repo and, TTBOMK, PPC
releases were discontinued after 11.1.

> So for the majority of people just using the interweb will be the
> easiest solution:
> * Less download
> * Less configuration
> * More standard
> * Less that can go wrong
>
> Obviously there will be people who (think they) need all the stuff local.

Hi :-)

Not sure if I'd fit under the 'know' or the 'think they need' category. I
think I'd put myself under 'know' for a couple of versions, if you only
count real hardware. If you throw in virtual machines as well, I'd fit
under the 'know' category for all the repos, including Factory.

> For older versions. Just download all the moment you see it will be
> going out of service. That way you only need to download it once.

Well, not quite. If you've been connecting and retrieving the packages
as required, you're going to end up re-downloading them when you build
your local mirror. In the end, it's a little quicker to grab them at
the beginning and work from your local mirror.

And, as for old mirrors that are going to be removed, I wouldn't bother
with them once all the systems relying on them have been upgraded.
Having said that, I still have a 10.2 mirror which I should have
removed months ago, and my 10.3 mirror that will get removed once I
upgrade the last of my 10.3 boxes.

>> A USB drive would be a better option if mirroring the updates is
>> included as there's quite often package and/or metadata changes, and
>> it's a good idea to minimise writes to USB memory sticks. Also, while a
>> USB key may (possibly) be a little faster, they aren't up to the same
>> capacity as a hard drive and are also a lot more expensive.
>
> OTOH a stick is easy to put in your pocket and several GB is not really
> an issue anymore.

They're a lot easier to carry about, but getting hold of an affordable
60+GB USB stick is not going to happen, at least for my definition of
affordable, for some time yet.

>> Why not export them using:
>>
>> zypper lr -e repos
>>
>> copy the repos.repo file to the other machine and then import the repos
>> using:
>>
>> zypper ar -r repos
>
> Stupid zypper and all its good stuff. ;-)

Yep. Who'd have thought that it would be possible to do such a silly
thing as exporting the list of repos so they could be imported on
another machine?

Unfortunately, one thing they've not implemented is allowing piping
data into the zypper ar command. If they had, it would allow using

zypper lr -e - >/dev/tcp/$host/$port

on the source machine and then using:

netcat -lnp $port | zypper ar -r -

on the destination machine, possibly using sed to perform version
changes during the transfer. Instead of the above, you'd need to use

netcat -lnp $port >$repo.repo ; zypper ar -r $repo.repo

on the destination machine.
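
As a concrete sketch of that workaround, with the port number, filename
and the sed expression only as illustrations (the sed step being the
version change mentioned above):

# destination machine: receive the exported list, bump the version, import it
netcat -lnp 9999 | sed 's/11\.1/11.2/g' > transferred.repo
zypper ar -r transferred.repo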

> http://en.opensuse.org/Zypper/Usage/11.2 There are even cheatsheets over
> yonder.

You know, I've seen that link several times and still haven't looked at
it.

>> rsync -avP \
>> --exclude rpm/ppc \
>> --exclude rpm/ppc64 \
>> --exclude deltas/*.ppc.delta.rpm \
>> --exclude deltas/*.ppc64.delta.rpm \
>> --safe-links \
>> --delete-after \
>> --delete-excluded \
>> --timeout=1800 \
>> "${server}/${version}" \
>> "${dest}/"
>
> Didn't you used to have a Mac?

I still do but, as mentioned above, I only have a single PPC-based
system, so it's a waste of time downloading PPC packages for the
various releases when it only runs 11.1. I've still to actually look into
what the status is for 11.2 and beyond, as I'd like to keep it running
openSUSE but may have to use Debian once 11.1 goes EOL.


[0] My latest experiment is in learning how to handle RAID arrays, and
what to do when drives fail. It's been an interesting couple of days
and, so far, I seem to be getting the hang of it. I'm even tempted to
set up one of my machines with a real RAID array as a test, just to
make sure what I've learnt with the virtual machine holds true on real
hardware. I don't expect it to be any different, although I expect it
to be a little bit faster at rebuilding the array after a failure. Once
I'm certain I can handle failures properly without data loss, I'll end
up setting up a file server with as many drives as I can fit in it.
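
For anyone wanting to run the same drill without risking real data, a
rough sketch with mdadm (device names are only examples; spare partitions
or loop devices work fine for practice):

# build a two-disk RAID1, then practise replacing a failed member
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm /dev/md0 --fail /dev/sdb1      # simulate the failure
mdadm /dev/md0 --remove /dev/sdb1    # pull the dead member out
mdadm /dev/md0 --add /dev/sdd1       # add the replacement and let it rebuild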

Regards,
David Bolt

--
Team Acorn: www.distributed.net OGR-NG @ ~100Mnodes RC5-72 @ ~1Mkeys/s
openSUSE 11.0 32b | | | openSUSE 11.3M0 32b
openSUSE 11.0 64b | openSUSE 11.1 64b | openSUSE 11.2 64b |
TOS 4.02 | openSUSE 11.1 PPC | RISC OS 4.02 | RISC OS 3.11
