From: David Bolt on
On Tuesday 19 Jan 2010 16:55, while playing with a tin of spray paint,
script||die painted this mural:

> On 01/19/2010 10:57 AM, houghi wrote:
>> David Bolt wrote:
>>> That is a much harder task. You'd need to have a complete list of all
>>> the packages installed on each machine, including any possible
>>> architecture or release differences. Maintaining them would be a bit of
>>> a pain so it'd probably be easier to just mirror the entire selection
>>> of packages.
>>
>> It depends on the number of machines. I just connect them to the
>> standard connection over the interweb and have everything I need.
>> Instead of downloading a LOT that I don't need, I will download some
>> things twice. I would think that I need a lot of machines (20+ at least)
>> per version and architecture to make it a bit interesting.

<snip>

> Both of you seem to have something I don't: a level of expertise.

Don't worry. You'll pick it up over time and there's always this group
and the openSUSE mailing lists and forums if you get stuck.

> I find
> myself in a situation of a relative dimwit having been volunteered to be
> 'tha man' to maintain half a dozen systems for friends, family and one
> benevolent community's machines all for free!

Hope you at least get a decent cuppa while doing this. Years ago, while
acting as tech support for my mother, I had to bring my own tea bags.
It was either that or drink herbal tea, and I haven't tasted a single
herbal tea that I could drink.

> If I let slip the term
> 'rural' that should be inspiring. I don't even have time to spill coffee
> on the keyboard much less get deep into networking.

If all you're doing is carting around a copy of the update mirror, you
don't need to know a lot about networking. Just enough to be able to
create your mirror and how to configure the other machines to use it
when it's available.
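As a sketch of what that carting-around can look like, rsync is enough. The mirror URL here is a placeholder and the mount point and version are assumptions, so adjust all three to suit:

```shell
#!/bin/bash
# Sketch: refresh a portable copy of an update repo with rsync.
# The mirror URL is a placeholder and the mount point is an
# assumption -- pick a real mirror near you and adjust the version.
MIRROR="rsync://your.mirror.example/opensuse/update/11.2/"
DEST="/media/portable/update/11.2/"

mkdir -p "${DEST}"
# --delete keeps the portable copy identical to the mirror
rsync -av --delete "${MIRROR}" "${DEST}"
```

On each machine, the portable copy can then be added as a plain directory repo with something like `zypper ar dir:///media/portable/update/11.2 updates`.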

> I shudda stuck with
> freakin windows and then NOBODY would be asking nothing of me, really,
> I'm looking for a way to let everyone just fend for themselves :-(

Once you get the moniker of "computer guy", it can be hard to get rid
of it. I've managed to stop most of the people who used to ask me for
help, but still have to contend with three. Knowing I've switched
completely to Linux means two of them rarely ask for help, since
they're still using WinXP and are unwilling to change.

> About the local repos, some computers have no networking at all, some
> are only connectable under stringent house-policies. With my 'flock'
> spread out over an area the only really practical solution has been and
> will remain portable HD and lately usb repo transport. I'll agree that
> the portable HD still and by far surpasses the usb fad for any number of
> reasons, but it's so easy to just plug in.

> The one machine I'm really
> proud of is a 10.3 installed in 2003 that I haven't touched since and
> which is still doing what it was intended to but it's an exception.

I had one like that[0]. It was running 9.3 since it was released and, up
to the motherboard failing, needed little to no maintenance. The only
reason that machine is no longer running 9.3 is because the chipset was
unsupported. In fact, I ended up with 11.1 on that box because I
couldn't even install 11.0 on it. Now it's doing the same job as before
and, with luck, will need the same level of care.

> A lot of otherwise commendable effort has seen online installs and
> updating improve over time because it's easy and popular but maybe the
> offline has been neglected somewhat.

You can do offline updates almost as easily. Create a mirror and point
YaST/zypper at it. Don't enable automatic updates or there will be
complaints about missing repos.

One thing you could do is make sure the name of the update repo
is the same on all the machines, no matter which version they're
running, and then have a small script on the USB stick/portable drive
that enables the update repo, refreshes it, does the update, and then
disables it again.

Something like this should work, although you may need to do some
testing:

#!/bin/bash

# pick a standard repo name
#
UPDATE_REPO_NAME="updates"

VERSION=$(zypper -V)
VERSION=${VERSION#* }

AUTO=""
NOAUTO=""

# need to know which version of zypper is in use as some options
# changed between v0.8 and 0.11
#
case "${VERSION}" in
0.8.*) # 10.3
AUTO="-a"
NOAUTO="---disable-autorefresh"
;;
0.11.*) # 11.0
AUTO="-r"
NOAUTO="-R"
;;
1.*) # 11.1/11.2
AUTO="-r"
NOAUTO="-R"
;;
esac


# enable the update repo, and turn on auto-refresh
#
zypper mr -e "${AUTO}" "${UPDATE_REPO_NAME}" || exit 1

# this will refresh the repo(s) and do the update
#
zypper up

# by using the line below instead of the one above,
# you could almost completely automate the update
#
# this would only fail if there are conflicts that zypper can't solve
# without intervention
#
# zypper up -y -r "${UPDATE_REPO_NAME}"

# and now we disable the repo, and turn off the auto-refresh
#
zypper mr -d "${NOAUTO}" "${UPDATE_REPO_NAME}" || exit 1

exit 0


>> Didn't you used to have a Mac?
>
> Naaw, I was delivered in an amiga box and then went compuke-awol for years.

Never really liked the Amigas. I could use them, and even did on a few
occasions, but always ended up going back to Atari systems[1] which I
much preferred.


[0] Well, two systems. I've still got an old system running SuSE 9.1.
As for its job, I can't remember what it originally did. I used to use
it as a laptop but haven't done so in years. Nowadays, the only thing
it does is contribute to my distributed.net key rates.

[1] Is this where we get to have another Atari/Amiga flame-war? I do
sometimes miss the good old days, and being able to throw in comments
about how RISCOS is better than both their GUIs was a nice way to stir
the pot :-)

Regards,
David Bolt

--
Team Acorn: www.distributed.net OGR-NG @ ~100Mnodes RC5-72 @ ~1Mkeys/s
openSUSE 11.0 32b | | | openSUSE 11.3M0 32b
openSUSE 11.0 64b | openSUSE 11.1 64b | openSUSE 11.2 64b |
TOS 4.02 | openSUSE 11.1 PPC | RISC OS 4.02 | RISC OS 3.11
From: David Bolt on
On Tuesday 19 Jan 2010 20:09, while playing with a tin of spray paint,
houghi painted this mural:

> David Bolt wrote:
>> My mirrors are:
>>
>> 17892548 /local/openSUSE-11.0-GM
>> 14038316 /local/openSUSE-11.1-GM
>> 14161708 /local/openSUSE-11.2-GM
>> 12931220 /local/openSUSE-11.3-GM
>> 8756416 /local/openSUSE-11.1-ppc
>
> There is a -h option, you know. ;-)

That makes it too easy.

> So a rough calculation: about 60GB.

About 59GB with just the x86 repos, 68GB if the PPC repos are added,
and almost 100GB if the 10.2 and 10.3 repos are added. And that doesn't
include the update mirrors, which themselves occupy a further 80GB.

>> so it only needs a few machines running the same version before
>> maintaining a mirror locally uses up less bandwidth than retrieving
>> virtually the same packages for each machine. My estimate was just 3 or
>> 4 4GB installs, and having a local mirror saves me from using my adsl
>> connection.
>
> It all obviously depends on how much you install. I was also thinking
> just about the later updates, not about the installation itself, so
> indeed a lot less PCs.
>
> My installs are around 2.5GB

My full installs are usually larger than 4GB, mainly because I add
almost all of KDE and many of the development packages. I think the
highest I've done was a little short of 6.5GB.

Now I don't bother with adding most of the development packages. I tend
to just write a spec file and use the build script to build a package
in a chroot environment.

>> Also, just in case you wonder why the 11.0 mirror is bigger than the
>> others, up to that release the PPC packages were contained within the
>> main repo. For 11.1 they went into their own repo, and TTBOMK, PPC
>> releases were discontinued after 11.1.
>
> Also things are taken out later on and placed in other repos (OBS)

There is that as well. It will be interesting to compare the final size
of 11.3 to 11.2. At the moment it's 1.2GB smaller, but there's a while
to go before it's finally released.

>> Well, not quite. If you've been connecting and retrieving the packages
>> as required, you're going to end up re-downloading them when you build
>> your local mirror. In the end, it's a little quicker to grab them at
>> the beginning and work from your local mirror.
>
> OK. As I said, I was not thinking about the installation, because I
> always had the boxed set available (Just not this time, strange)

I had wondered about that.

>> And, as for old mirrors that are going to be removed, I wouldn't bother
>> with them once all the systems relying on them have been upgraded.
>
> Yes. I was thinking if you would want to keep a system running, it would
> be better to have the rpm files available.

Definitely. If only to be able to back out of an upgrade.

>> Having said that, I still have a 10.2 mirror which I should have
>> removed months ago, and my 10.3 mirror that will get removed once I
>> upgrade the last of my 10.3 boxes.
>
> rm /mirror/10.2 -rf
> Just helping. :-D



>> They're a lot easier to carry about, but getting hold of an affordable
>> 60+GB USB stick is not going to happen, at least for my definition of
>> affordable, for some time yet.
>
> iPod? My video camera has 60GB and USB.

An iPod or video camera aren't really USB sticks. They're closer to the
removable drives.

>> Unfortunately, one thing they've not implemented is allowing piping
>> data into the zypper ar command. If they had, it would allow using
>>
>> zypper lr -e - >/dev/tcp/$host/$port
>
> Not sure what you want to do. I do not even have /dev/tcp/*

Bash gives you the pseudo devices /dev/tcp and /dev/udp. To use them,
you send the output to /dev/tcp/$host/$port , where $host is either the
dotted quad or the host name, and $port is the destination port number.
On the receiving end, you can use netcat to receive the data being
transmitted.

A quick example is to open two consoles. In one type:

netcat -lnp 32123

and in the second type:

ls -l >/dev/tcp/localhost/32123

The first console should get the output from ls.


I quite often use it when archiving up my backups. I archive using:

tar cvf - $some_mask >/dev/tcp/$dest_host/$port

and at the receiving end, either use:

netcat -lnp $port | ( cd $dest_dir; tar xvf - )

to unpack it into a directory elsewhere, or split it into 95MB chunks
ready to archive to DVD using:

netcat -lnp $port | split -d -b 100431872 -a 3 - \
"$base_path/backup - $(date +'%Y-%m-%d_%H').tar."

Once this is complete, I run a script that creates multiple
directories, moves the chunks into them while trying to limit the size
to 3.5GB, then it runs par2cmdline to create about 500MB of recovery
data just in case there's a future error while reading back from the
discs. Finally, it creates a DVD ISO from the directory ready to burn
to disc.
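A rough sketch of that grouping stage, as a bash function. The directory layout and the 3.5GB limit are assumptions based on the description above, and the par2create/genisoimage calls are shown only as comments since the exact options will depend on your setup:

```shell
#!/bin/bash
# Sketch: group backup chunks into size-limited directories ready
# for burning. Paths and the size limit are assumptions.

# group_chunks <chunk_dir> <out_base> <limit_bytes>
# moves *.tar.* chunks into <out_base>/disc-N directories, starting
# a new directory whenever the next chunk would exceed the limit
group_chunks() {
    local chunk_dir="$1" out_base="$2" limit="$3"
    local dir_num=0 used=0 size chunk
    mkdir -p "${out_base}/disc-${dir_num}"
    for chunk in "${chunk_dir}"/*.tar.*; do
        size=$(stat -c %s "${chunk}")
        if [ $((used + size)) -gt "${limit}" ] && [ "${used}" -gt 0 ]; then
            dir_num=$((dir_num + 1))
            used=0
            mkdir -p "${out_base}/disc-${dir_num}"
        fi
        mv "${chunk}" "${out_base}/disc-${dir_num}/"
        used=$((used + size))
    done
}

# after grouping, each directory would get recovery data and an ISO,
# along the lines of:
#   par2create -r15 "${d}/recovery.par2" "${d}"/*.tar.*
#   genisoimage -R -J -o "${d}.iso" "${d}"
```

Example: `group_chunks /backup/chunks /backup/dvd $((3500 * 1024 * 1024))` would pack the chunks into roughly 3.5GB directories.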

>>> http://en.opensuse.org/Zypper/Usage/11.2 There are even cheatsheets over
>>> yonder.
>>
>> You know, I've seen that link several times and still haven't looked at
>> it.
>
> There are some nice examples there:
> zypper in amarok packman:libxine1 # install libxine1 from packman and
> # amarok from any repo

That was something I didn't know. I think I'll have to go and have a
read. If I'm lucky, something might sink in.

>> [0] My latest experiment is in learning how to handle RAID arrays, and
>> what to do when drives fails. It's been an interesting couple of days
>> and, so far, I seem to be getting the hang of it.
>
> Try the following:

> 4 1TB disks Raid 5. 8GB swap 100MB /boot and all the rest on /
> Now you have disk space more than 2TB. Have fun. :-(

4 1TB drives, minus the 8.1GB, should give you a little bit less than
3TB of space, if I got my maths right.

> We ended up making a 2TB / and about 770GB /backup

The way I'm eventually looking to go is to start with 3 1TB drives.
I'll use a separate IDE drive for swap and maybe to hold some stuff
that doesn't matter, possibly /home. Then, I'll partition the drives
to give me two partitions, one of 3GB and the other having the
remainder. The first partition on each drive I'll set up as a RAID 1
and have it contain / and /boot. The second partition on each drive
I'll add to a RAID 5, create a volume group and use LVM to add a single
XFS partition.

All I need to do is find out if I can add drives later to make the
/dev/mdx grow, and so be able to enlarge the volume group and the
partition within it.

> It is a nice machine. 16 core. Real HW raid. KVM over IP. It runs Fedora.

You haven't persuaded them to swap to openSUSE quite yet?

> Dual PSU. 1U
> And that KVM over IP is already nice to play with.

Sounds nice.

> Ok, it was a bit more expensive than your standard PC.

I can imagine.

> It also makes a
> LOT more noise.

Not at all surprised.

> Luckily it is in the datacenter together with two of its
> smaller brothers.

:-)


Regards,
David Bolt

--
Team Acorn: www.distributed.net OGR-NG @ ~100Mnodes RC5-72 @ ~1Mkeys/s
openSUSE 11.0 32b | | | openSUSE 11.3M0 32b
openSUSE 11.0 64b | openSUSE 11.1 64b | openSUSE 11.2 64b |
TOS 4.02 | openSUSE 11.1 PPC | RISC OS 4.02 | RISC OS 3.11
From: Stephen Horne on
On Tue, 19 Jan 2010 20:33:08 +0100, houghi <houghi(a)houghi.org.invalid>
wrote:

>> I'm looking for a way to let everyone just fend for themselves :-(
>
>If you really want to get out, start charging money.

I have to agree here. Helping family, friends and community I'm all
for. Letting it take over your life is something else.

As long as you're doing it for free, no-one is ever likely to become
even remotely self-sufficient. Why bother? There'll be appreciation
and compliments on your amazing ability etc, of course. But even if
you try to instruct people while solving their problems, all you'll
get is that blank look.

It will never even occur to them to listen to you - it's just too
convenient to assume you're just fishing for praise. You won't get
"can you explain that point again". You'll get "wow, you're really
clever, I could never learn to use a computer like that". And that's
just when you're showing them the "on" button.

In the long run, the best thing you can do - for yourself *AND* for
them - is to force them to become more self sufficient.

And at the end of the day, if it isn't worth their time to learn this
stuff for their own benefit, how come it's worth wasting your time to
do it for them over and over again?

A baby may be distressed when first refused its mother's milk, but
there's a time to start eating solids, and even a time to start buying
and cooking your own meals.

From: Stephen Horne on

Thanks everyone - particularly for the rsync, zypper etc tips.

From: script||die on
On 01/19/2010 02:33 PM, houghi wrote:
> script||die wrote:
>
>> I'm looking for a way to let everyone just fend for themselves :-(
>
> If you really want to get out, start charging money.

hehehe

>> About the local repos, some computers have no networking at all, some
>> are only connectable under stringent house-policies.
>
> Then you must ask yourself why you would need these repos local. You do
> not need to update, as updates are security updates and those do not
> apply. Give them all the official DVD and they will have almost all of
> the software they will need.
> If there are other programs they need, just download the RPM and install
> that directly. Better is often just to see if there is an alternative
> that IS on the DVD.

I'll do my Nixon bit here.

Let me make this perfectly clear: some of the people I help out are in
an old folks home. I explain things to them a thousand times and when I
come back a week later they greet me with a huge thankful smile asking
only "who are YOU?".

Give'em DVD's? :-))))))))

>> A lot of otherwise commendable effort has seen online installs and
>> updating improve over time because it's easy and popular but maybe the
>> offline has been neglected somewhat.
>
> Well, that is because there is less and less need for it.

I'll beg to differ on that one any day, at least from my end.

> First if you won't have any connection later on, download the DVD. That
> will give you a LOT of software.
>
> An install is however much easier done if there is a network connection,
> as then it is trivial to install e.g. MPlayer to watch movies. This only
> needs to be done once.

Well, believe me, I have a good feel for what I need, and offline
maintenance via a local hard repo that I can move around is it.

Obliquely related, I'm trying something new now: a kind of travelling
home folder full of preconfigured files, but otherwise really minimal,
which, once placed just before YaST creates that user, results in most
of the look & feel I want without having to go through the ritual
everywhere. I'll see how this turns out then rethink as necessary.
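One way that travelling home folder could be wired in, assuming the profile lives on the portable drive (the source path here is an assumption), is to merge it into /etc/skel before the user is created, since new accounts are seeded from /etc/skel:

```shell
# Sketch: merge a preconfigured profile into /etc/skel so that any
# user created afterwards inherits it. The source path is an
# assumption; this needs to be run as root.
rsync -av /media/portable/skel-home/ /etc/skel/

# any user created after this point starts with the preconfigured
# files in their home directory
useradd -m newuser
```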
