From: Michael S. Tsirkin on 15 Jul 2010 08:50

On Thu, Jul 15, 2010 at 03:19:12PM +0300, Michael S. Tsirkin wrote:
> We flush under the vq mutex when changing backends.
> This creates a deadlock, as the workqueue being flushed
> needs this lock as well.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=612421
>
> Drop the vq mutex before the flush: we hold the device mutex,
> which is sufficient to prevent another ioctl from touching
> the vq.
>
> Signed-off-by: Michael S. Tsirkin <mst(a)redhat.com>

Dave, just to clarify: I'll send a pull request to merge this through my tree, so there's no need for you to bother with it.

> ---
>  drivers/vhost/net.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 28d7786..50df58e6 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -534,11 +534,16 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>  	rcu_assign_pointer(vq->private_data, sock);
>  	vhost_net_enable_vq(n, vq);
>  done:
> +	mutex_unlock(&vq->mutex);
> +
>  	if (oldsock) {
>  		vhost_net_flush_vq(n, index);
>  		fput(oldsock->file);
>  	}
>
> +	mutex_unlock(&n->dev.mutex);
> +	return 0;
> +
>  err_vq:
>  	mutex_unlock(&vq->mutex);
>  err:
> --
> 1.7.2.rc0.14.g41c1c

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Sridhar Samudrala on 15 Jul 2010 14:20

On Thu, 2010-07-15 at 15:19 +0300, Michael S. Tsirkin wrote:
> We flush under the vq mutex when changing backends.
> This creates a deadlock, as the workqueue being flushed
> needs this lock as well.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=612421
>
> Drop the vq mutex before the flush: we hold the device mutex,
> which is sufficient to prevent another ioctl from touching
> the vq.

Why do we need to flush the vq when we try to set the backend and find that it is already set? Is this just an optimization?

Thanks
Sridhar