From: Groups Poster on
Colin B. wrote:
> M.H <haed98(a)excite.com> wrote:
> > Hi,
> >
> > When the server started, it drops on the maintenance shell because it
> > can't mount the /dev/vx/rdsk/bootdg/rootvol.
>
> It's not bootdg, it's rootdg.

No, it is not. With Volume Manager 4.0 the boot disk is in bootdg by
default. If you manually set it to rootdg, it creates a link.

ls -l /dev/vx/dsk/bootdg
lrwxrwxrwx 1 root root 6 May 13 06:57 /dev/vx/dsk/bootdg -> rootdg

It is set up to fsck/mount bootdg in the vfstab as well.
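You can check the mapping yourself; on a 4.0 box something like the
following shows that bootdg is just an alias (output illustrative, volume
names assumed):

# vxdg bootdg
rootdg
# grep rootvol /etc/vfstab
/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no -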

From: Groups Poster on
M.H wrote:
> Hi,
>
> When the server started, it drops on the maintenance shell because it
> can't mount the /dev/vx/rdsk/bootdg/rootvol.

Can you please cut and paste what you have for rootvol in the vfstab.
Is it maybe trying to mount the raw device?



> bash-2.03# mount /dev/vx/rdsk/bootdg/rootvol /mnt
> mount: /dev/vx/rdsk/bootdg/rootvol not a block device

Like Darren said you should mount the dsk and not rdsk.
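That is, the block device under dsk should mount fine, e.g.:

# mount /dev/vx/dsk/bootdg/rootvol /mnt

The rdsk path is the raw (character) device, which is what fsck wants,
not mount.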


> bash-2.03# vxdiskadm
> VxVM vxdiskadm ERROR V-5-2-3540 Cannot create lock file
> /var/spool/locks/.DISKADD.LOCK

Probably because /var isn't mounted.
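In the maintenance shell you could mount it by hand before retrying
(assuming var lives on its own volume, as it does later in this thread):

# mount /dev/vx/dsk/bootdg/var /var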

>
> ================
> bash-2.03# vxprint -Ath
> Disk group: rootdg

The vxprint looks ok. Can you post an output of vxdisk list too?
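For comparison, a healthy encapsulated boot disk pair would show up
roughly like this (illustrative output only, disk media names assumed):

# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk     rootdg       online
c0t1d0s2     sliced    rootmirror   rootdg       online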


>
> BR
> haed

From: Colin B. on
Groups Poster <groupsp(a)gmail.com> wrote:
> Colin B. wrote:
>> M.H <haed98(a)excite.com> wrote:
>> > Hi,
>> >
>> > When the server started, it drops on the maintenance shell because it
>> > can't mount the /dev/vx/rdsk/bootdg/rootvol.
>>
>> It's not bootdg, it's rootdg.
>
> No, it is not. With Volume Manager 4.0 the boot disk is in bootdg by
> default. If you manually set it to rootdg, it creates a link.

Be that as it may, the OP clearly has a rootdg, NOT a bootdg. Consider:

> bash-2.03# vxprint -Ath
> Disk group: rootdg
> ...

Colin
From: Groups Poster on

> Be that as it may, the OP clearly has a rootdg, NOT a bootdg. Consider:
>
> > bash-2.03# vxprint -Ath
> > Disk group: rootdg
> > ...

Your actual disk group name may be rootdg (mine are just to be
consistent with the older VxVM) but 4.0 still uses bootdg in the
vfstab/startup scripts.

eg:

# vxdctl list
Volboot file
version: 3/1
seqno: 0.3
cluster protocol version: 50
hostid: mymachine
defaultdg: rootdg
bootdg: rootdg

# vxdg list
NAME STATE ID
rootdg enabled 1112965225.18.mymachine
T3dg enabled 1089819053.1231.mymachine

# grep rootvol /etc/vfstab
#NOTE: volume rootvol (/) encapsulated partition c1t0d0s0
/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no logging

>
> Colin

From: M.H on
Hello everybody,

1. My /etc/vfstab is correct:

/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no -

2. I'm using VxVM 4.0 so having bootdg in /etc/vfstab is normal.

3. Just to clarify something about my problem:
This is a test server with two disks: a boot disk (c0t0d0s0) and a
bootmirror disk (c0t1d0s0). I want to test the "VERITAS[TM] Volume
Manager 3.x/4.x: Installing Kernel patches" procedure below (when I want
to install a Solaris kernel patch, I want to take the bootmirror disk
OFFLINE, and if the patching goes wrong I can boot from the bootmirror
disk as explained in the document).

- 1st problem: When I offline the bootmirror (c0t1d0s0) disk, after
booting from cdrom, the /dev/dsk/c0t1d0s0 filesystem is not clean; I
have to fsck it and correct some problems.
- 2nd problem: When I reboot the system from the bootmirror disk,
before "Reattaching the plexes rootvol-01, swapvol-01, to the relevant
volume and resync" (see below), the system drops to the maintenance
shell because it can't mount /dev/vx/dsk/bootdg/rootvol, as I said in
my first mail.
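For the first problem, cleaning the slice by hand before mounting it
should be enough, e.g.:

# fsck -y /dev/rdsk/c0t1d0s0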

Thanks for the help.


======= Begin of the document=======
Document Audience: SPECTRUM

Document ID: 79356

Title: VERITAS[TM] Volume Manager 3.x/4.x: Installing Kernel patches.

Update Date: Tue Jan 04 00:00:00 MST 2005

Products: VERITAS Volume Manager 3.5 Software, VERITAS Volume Manager
4.0 Software

Technical Areas: Administration

Last Updated By: Fiona Turnbull

Keyword(s):VxVM installing kernel patches

Description:

If a problem occurs during patching, a customer may wish to recover from
the mirrored rootdisk. Since all plexes will be in a clean state, the
procedure differs from recovering after disk failure.


Document Body:

When a customer installs Kernel patches in single-user mode, vxvol will
automatically resynchronize the data from the rootdisk to the rootmirror
as all plexes are in a clean state after a reboot.

If there has been a problem during the patching process and the customer
needs to restore from the rootmirror, they need to ensure that this
resync does not occur.

In order to achieve this, each plex on the rootmirror needs to be
off-lined. It is then possible to boot from the underlying partitions
of the root mirror and change the state of the rootdisk plexes to
detached. Then, booting from the mirror, vxvol will mirror the data
from the rootmirror to the rootdisk.

Below is the recommended procedure when splitting a VxVM mirror for
patching.

The rootdisk has the following plexes:

rootvol-01
swapvol-01
var-01
usr-01


The rootmirror has the following plexes:

rootvol-02
swapvol-02
var-02
usr-02

1. Split the mirror

Offline the mirrors for the system volumes via the command line:

# vxmend off rootvol-02
# vxmend off swapvol-02
# vxmend off var-02
# vxmend off usr-02
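You can verify the plexes really are offline before patching, e.g. (the
state column should show OFFLINE for the -02 plexes):

# vxprint -ht | grep OFFLINE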

2. Patch the software

Bring the system to the correct run-level to perform the software patching.

3. If successful, re-enable the mirrors and resync them:

# vxmend on rootvol-02
# vxmend on swapvol-02
# vxmend on var-02
# vxmend on usr-02
# vxrecover -s


4. If unsuccessful, use the procedure below.

*
Boot the system

OBP> boot cdrom -sw


*

After mounting the root partition of the rootmirror disk, back up
and edit the /etc/vfstab file. Replace the /dev/vx entries with the
underlying slice entries for all filesystems that reside on the system disk.
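For example, with the mirror disk used in this thread, a line such as

/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no -

would be replaced by its underlying slice:

/dev/dsk/c0t1d0s0 /dev/rdsk/c0t1d0s0 / ufs 1 no -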
*

Back up and edit the /etc/system file, removing the following two
entries:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
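Since /etc/system treats a leading asterisk as a comment, commenting the
lines out works just as well as removing them:

* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1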


*

Touch the /etc/vx/reconfig.d/state.d/install-db file to prevent
VxVM from starting on the next boot from this disk.
*

Shut down and explicitly boot from the mirror disk.
*

Start up the VxVM processes manually.

# rm /etc/vx/reconfig.d/state.d/install-db
# vxiod set 10
# vxconfigd -d
# vxdctl init <hostname>
# vxdctl enable


*
Change the states of the volumes rootvol, swapvol, var and usr so that
the rootvol-01, swapvol-01, var-01 and usr-01 plexes are detached, and
the rootvol-02, swapvol-02, var-02 and usr-02 plexes are the only clean
plex in each volume.

# vxplex -f det rootvol-01
# vxmend on rootvol-02
# vxmend fix clean rootvol-02
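The same sequence applies to each of the other volumes, e.g. for swap
(and likewise for var and usr):

# vxplex -f det swapvol-01
# vxmend on swapvol-02
# vxmend fix clean swapvol-02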


*
Restore the saved /etc/vfstab and /etc/system files.

*

Reboot the system, booting explicitly from the mirror disk again.
*

Reattach the plexes rootvol-01, swapvol-01, var-01 and usr-01 to
the relevant volume and resync.

# vxplex att rootvol rootvol-01
# vxplex att swapvol swapvol-01
# vxplex att var var-01
# vxplex att usr usr-01
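The resynchronization runs in the background; its progress can be
watched with:

# vxtask list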
===END of the Document===========

BR

haed.
