Discussion:
[RFC 0/4] rpmsg: Fix init of DMA:able virtqueues
Edgar E. Iglesias
2015-05-01 05:01:43 UTC
From: "Edgar E. Iglesias" <***@xilinx.com>

I'm trying to run rpmsg and remoteproc on the ZynqMP (arm64) but I'm hitting
a DMA/mm error. The issue was discussed here:
http://lists.infradead.org/pipermail/linux-arm-kernel/2015-March/333050.html

Russell King pointed out that arm64 is not doing anything wrong by
returning vmapped memory (which is incompatible with sg_phys()). Hence this
RFC series, which tries to illustrate/fix the problem in rpmsg/virtio.

Is this going in the right direction?
Any ideas or suggestions on how to better fix this?

Something that worries me a little is that the DMA capability for virtio
protocols is hardcoded like this; it would be nicer if it were somehow
selectable by the framework. I was hoping it would be possible to use
_any_ virtio-based protocol to communicate with remoteproc/DMA, and
not just rpmsg.

Thanks,
Edgar


Edgar E. Iglesias (4):
virtio_ring: Break out vring descriptor setup code
virtio_ring: Add option for DMA mapped sgs in virtqueue_add
virtio: Add dma variants of virtqueue_add_in and outbuf
rpmsg: DMA map sgs passed to virtio

drivers/rpmsg/virtio_rpmsg_bus.c | 28 ++++++++++++++++-----
drivers/virtio/virtio_ring.c | 53 +++++++++++++++++++++++++++++++---------
include/linux/virtio.h | 10 ++++++++
3 files changed, 74 insertions(+), 17 deletions(-)
--
1.9.1
Edgar E. Iglesias
2015-05-01 05:01:44 UTC
From: "Edgar E. Iglesias" <***@xilinx.com>

Break out descriptor setup into a separate inline function.
No functional change.

Signed-off-by: Edgar E. Iglesias <***@xilinx.com>
---
drivers/virtio/virtio_ring.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 096b857..66d2cb9 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -120,6 +120,16 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
return desc;
}

+static inline void vring_desc_set(struct virtio_device *vdev,
+ struct vring_desc *desc,
+ struct scatterlist *sg,
+ unsigned int flags)
+{
+ desc->flags = cpu_to_virtio16(vdev, flags);
+ desc->addr = cpu_to_virtio64(vdev, sg_phys(sg));
+ desc->len = cpu_to_virtio32(vdev, sg->length);
+}
+
static inline int virtqueue_add(struct virtqueue *_vq,
struct scatterlist *sgs[],
unsigned int total_sg,
@@ -205,18 +215,16 @@ static inline int virtqueue_add(struct virtqueue *_vq,

for (n = 0; n < out_sgs; n++) {
for (sg = sgs[n]; sg; sg = sg_next(sg)) {
- desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
- desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
- desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
+ vring_desc_set(_vq->vdev, desc + i, sg,
+ VRING_DESC_F_NEXT);
prev = i;
i = virtio16_to_cpu(_vq->vdev, desc[i].next);
}
}
for (; n < (out_sgs + in_sgs); n++) {
for (sg = sgs[n]; sg; sg = sg_next(sg)) {
- desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
- desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
- desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
+ vring_desc_set(_vq->vdev, desc + i, sg,
+ VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
prev = i;
i = virtio16_to_cpu(_vq->vdev, desc[i].next);
}
--
1.9.1
Edgar E. Iglesias
2015-05-01 05:01:45 UTC
From: "Edgar E. Iglesias" <***@xilinx.com>

Signed-off-by: Edgar E. Iglesias <***@xilinx.com>
---
drivers/virtio/virtio_ring.c | 25 +++++++++++++++----------
1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 66d2cb9..233f3c6 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -123,11 +123,13 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
static inline void vring_desc_set(struct virtio_device *vdev,
struct vring_desc *desc,
struct scatterlist *sg,
- unsigned int flags)
+ unsigned int flags,
+ bool dma)
{
desc->flags = cpu_to_virtio16(vdev, flags);
- desc->addr = cpu_to_virtio64(vdev, sg_phys(sg));
- desc->len = cpu_to_virtio32(vdev, sg->length);
+ desc->addr = cpu_to_virtio64(vdev,
+ dma ? sg_dma_address(sg) : sg_phys(sg));
+ desc->len = cpu_to_virtio32(vdev, dma ? sg_dma_len(sg) : sg->length);
}

static inline int virtqueue_add(struct virtqueue *_vq,
@@ -136,7 +138,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
unsigned int out_sgs,
unsigned int in_sgs,
void *data,
- gfp_t gfp)
+ gfp_t gfp,
+ bool dma)
{
struct vring_virtqueue *vq = to_vvq(_vq);
struct scatterlist *sg;
@@ -174,7 +177,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,

/* If the host supports indirect descriptor tables, and we have multiple
* buffers, then go indirect. FIXME: tune this threshold */
- if (vq->indirect && total_sg > 1 && vq->vq.num_free)
+ if (!dma && vq->indirect && total_sg > 1 && vq->vq.num_free)
desc = alloc_indirect(_vq, total_sg, gfp);
else
desc = NULL;
@@ -216,7 +219,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
for (n = 0; n < out_sgs; n++) {
for (sg = sgs[n]; sg; sg = sg_next(sg)) {
vring_desc_set(_vq->vdev, desc + i, sg,
- VRING_DESC_F_NEXT);
+ VRING_DESC_F_NEXT, dma);
prev = i;
i = virtio16_to_cpu(_vq->vdev, desc[i].next);
}
@@ -224,7 +227,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
for (; n < (out_sgs + in_sgs); n++) {
for (sg = sgs[n]; sg; sg = sg_next(sg)) {
vring_desc_set(_vq->vdev, desc + i, sg,
- VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
+ VRING_DESC_F_NEXT | VRING_DESC_F_WRITE,
+ dma);
prev = i;
i = virtio16_to_cpu(_vq->vdev, desc[i].next);
}
@@ -292,7 +296,8 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
for (sg = sgs[i]; sg; sg = sg_next(sg))
total_sg++;
}
- return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs, data, gfp);
+ return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs, data, gfp,
+ false);
}
EXPORT_SYMBOL_GPL(virtqueue_add_sgs);

@@ -314,7 +319,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
void *data,
gfp_t gfp)
{
- return virtqueue_add(vq, &sg, num, 1, 0, data, gfp);
+ return virtqueue_add(vq, &sg, num, 1, 0, data, gfp, false);
}
EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);

@@ -336,7 +341,7 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
void *data,
gfp_t gfp)
{
- return virtqueue_add(vq, &sg, num, 0, 1, data, gfp);
+ return virtqueue_add(vq, &sg, num, 0, 1, data, gfp, false);
}
EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
--
1.9.1
Edgar E. Iglesias
2015-05-01 05:01:46 UTC
From: "Edgar E. Iglesias" <***@xilinx.com>

Signed-off-by: Edgar E. Iglesias <***@xilinx.com>
---
drivers/virtio/virtio_ring.c | 18 ++++++++++++++++++
include/linux/virtio.h | 10 ++++++++++
2 files changed, 28 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 233f3c6..dd6c640 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -323,6 +323,15 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
}
EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);

+int dma_virtqueue_add_outbuf(struct virtqueue *vq,
+ struct scatterlist *sg, unsigned int num,
+ void *data,
+ gfp_t gfp)
+{
+ return virtqueue_add(vq, &sg, num, 1, 0, data, gfp, true);
+}
+EXPORT_SYMBOL_GPL(dma_virtqueue_add_outbuf);
+
/**
* virtqueue_add_inbuf - expose input buffers to other end
* @vq: the struct virtqueue we're talking about.
@@ -345,6 +354,15 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
}
EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);

+int dma_virtqueue_add_inbuf(struct virtqueue *vq,
+ struct scatterlist *sg, unsigned int num,
+ void *data,
+ gfp_t gfp)
+{
+ return virtqueue_add(vq, &sg, num, 0, 1, data, gfp, true);
+}
+EXPORT_SYMBOL_GPL(dma_virtqueue_add_inbuf);
+
/**
* virtqueue_kick_prepare - first half of split virtqueue_kick call.
* @vq: the struct virtqueue
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 8f4d4bf..c31b3e2 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -34,6 +34,16 @@ struct virtqueue {
void *priv;
};

+int dma_virtqueue_add_outbuf(struct virtqueue *vq,
+ struct scatterlist sg[], unsigned int num,
+ void *data,
+ gfp_t gfp);
+
+int dma_virtqueue_add_inbuf(struct virtqueue *vq,
+ struct scatterlist sg[], unsigned int num,
+ void *data,
+ gfp_t gfp);
+
int virtqueue_add_outbuf(struct virtqueue *vq,
struct scatterlist sg[], unsigned int num,
void *data,
--
1.9.1
Edgar E. Iglesias
2015-05-01 05:01:47 UTC
From: "Edgar E. Iglesias" <***@xilinx.com>

Signed-off-by: Edgar E. Iglesias <***@xilinx.com>
---
drivers/rpmsg/virtio_rpmsg_bus.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 73354ee..9ae53a0 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -210,6 +210,22 @@ static void __ept_release(struct kref *kref)
kfree(ept);
}

+static inline dma_addr_t msg_dma_address(struct virtproc_info *vrp, void *msg)
+{
+ unsigned long offset = msg - vrp->rbufs;
+
+ return vrp->bufs_dma + offset;
+}
+
+static inline void rpmsg_msg_sg_init(struct virtproc_info *vrp,
+ struct scatterlist *sg,
+ void *msg, unsigned int len)
+{
+ sg_init_table(sg, 1);
+ sg_dma_address(sg) = msg_dma_address(vrp, msg);
+ sg_dma_len(sg) = len;
+}
+
/* for more info, see below documentation of rpmsg_create_ept() */
static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,
@@ -754,12 +770,12 @@ int rpmsg_send_offchannel_raw(struct rpmsg_channel *rpdev, u32 src, u32 dst,
print_hex_dump(KERN_DEBUG, "rpmsg_virtio TX: ", DUMP_PREFIX_NONE, 16, 1,
msg, sizeof(*msg) + msg->len, true);

- sg_init_one(&sg, msg, sizeof(*msg) + len);
+ rpmsg_msg_sg_init(vrp, &sg, msg, sizeof(*msg) + len);

mutex_lock(&vrp->tx_lock);

/* add message to the remote processor's virtqueue */
- err = virtqueue_add_outbuf(vrp->svq, &sg, 1, msg, GFP_KERNEL);
+ err = dma_virtqueue_add_outbuf(vrp->svq, &sg, 1, msg, GFP_KERNEL);
if (err) {
/*
* need to reclaim the buffer here, otherwise it's lost
@@ -828,10 +844,10 @@ static int rpmsg_recv_single(struct virtproc_info *vrp, struct device *dev,
dev_warn(dev, "msg received with no recipient\n");

/* publish the real size of the buffer */
- sg_init_one(&sg, msg, RPMSG_BUF_SIZE);
+ rpmsg_msg_sg_init(vrp, &sg, msg, RPMSG_BUF_SIZE);

/* add the buffer back to the remote processor's virtqueue */
- err = virtqueue_add_inbuf(vrp->rvq, &sg, 1, msg, GFP_KERNEL);
+ err = dma_virtqueue_add_inbuf(vrp->rvq, &sg, 1, msg, GFP_KERNEL);
if (err < 0) {
dev_err(dev, "failed to add a virtqueue buffer: %d\n", err);
return err;
@@ -1007,9 +1023,9 @@ static int rpmsg_probe(struct virtio_device *vdev)
struct scatterlist sg;
void *cpu_addr = vrp->rbufs + i * RPMSG_BUF_SIZE;

- sg_init_one(&sg, cpu_addr, RPMSG_BUF_SIZE);
+ rpmsg_msg_sg_init(vrp, &sg, cpu_addr, RPMSG_BUF_SIZE);

- err = virtqueue_add_inbuf(vrp->rvq, &sg, 1, cpu_addr,
+ err = dma_virtqueue_add_inbuf(vrp->rvq, &sg, 1, cpu_addr,
GFP_KERNEL);
WARN_ON(err); /* sanity check; this can't really happen */
}
--
1.9.1
Rusty Russell
2015-05-06 06:21:48 UTC
First off, I have handed maintainership off to Michael S. Tsirkin, so
his word is now law.

That said... there's nothing fundamentally *wrong* with this, but it's
not how standard virtio works. We decided some time ago that as we're
paravirtualized, we would not be doing address mapping.

rpmsg uses virtio, but it's with a twist: they're not talking to a
host. Thus my preference, in order, would be:

1) Don't use non-kmalloc addresses.
2) If that's not possible, call these _dma interfaces _rpmsg instead,
so normal virtio users don't get confused and try to use them.

Cheers,
Rusty.
Post by Edgar E. Iglesias
---
drivers/rpmsg/virtio_rpmsg_bus.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 73354ee..9ae53a0 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -210,6 +210,22 @@ static void __ept_release(struct kref *kref)
kfree(ept);
}
+static inline dma_addr_t msg_dma_address(struct virtproc_info *vrp, void *msg)
+{
+ unsigned long offset = msg - vrp->rbufs;
+
+ return vrp->bufs_dma + offset;
+}
+
+static inline void rpmsg_msg_sg_init(struct virtproc_info *vrp,
+ struct scatterlist *sg,
+ void *msg, unsigned int len)
+{
+ sg_init_table(sg, 1);
+ sg_dma_address(sg) = msg_dma_address(vrp, msg);
+ sg_dma_len(sg) = len;
+}
+
/* for more info, see below documentation of rpmsg_create_ept() */
static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,
@@ -754,12 +770,12 @@ int rpmsg_send_offchannel_raw(struct rpmsg_channel *rpdev, u32 src, u32 dst,
print_hex_dump(KERN_DEBUG, "rpmsg_virtio TX: ", DUMP_PREFIX_NONE, 16, 1,
msg, sizeof(*msg) + msg->len, true);
- sg_init_one(&sg, msg, sizeof(*msg) + len);
+ rpmsg_msg_sg_init(vrp, &sg, msg, sizeof(*msg) + len);
mutex_lock(&vrp->tx_lock);
/* add message to the remote processor's virtqueue */
- err = virtqueue_add_outbuf(vrp->svq, &sg, 1, msg, GFP_KERNEL);
+ err = dma_virtqueue_add_outbuf(vrp->svq, &sg, 1, msg, GFP_KERNEL);
if (err) {
/*
* need to reclaim the buffer here, otherwise it's lost
@@ -828,10 +844,10 @@ static int rpmsg_recv_single(struct virtproc_info *vrp, struct device *dev,
dev_warn(dev, "msg received with no recipient\n");
/* publish the real size of the buffer */
- sg_init_one(&sg, msg, RPMSG_BUF_SIZE);
+ rpmsg_msg_sg_init(vrp, &sg, msg, RPMSG_BUF_SIZE);
/* add the buffer back to the remote processor's virtqueue */
- err = virtqueue_add_inbuf(vrp->rvq, &sg, 1, msg, GFP_KERNEL);
+ err = dma_virtqueue_add_inbuf(vrp->rvq, &sg, 1, msg, GFP_KERNEL);
if (err < 0) {
dev_err(dev, "failed to add a virtqueue buffer: %d\n", err);
return err;
@@ -1007,9 +1023,9 @@ static int rpmsg_probe(struct virtio_device *vdev)
struct scatterlist sg;
void *cpu_addr = vrp->rbufs + i * RPMSG_BUF_SIZE;
- sg_init_one(&sg, cpu_addr, RPMSG_BUF_SIZE);
+ rpmsg_msg_sg_init(vrp, &sg, cpu_addr, RPMSG_BUF_SIZE);
- err = virtqueue_add_inbuf(vrp->rvq, &sg, 1, cpu_addr,
+ err = dma_virtqueue_add_inbuf(vrp->rvq, &sg, 1, cpu_addr,
GFP_KERNEL);
WARN_ON(err); /* sanity check; this can't really happen */
}
--
1.9.1
Edgar E. Iglesias
2015-05-07 00:28:09 UTC
Post by Rusty Russell
First off, I have handed maintainership off to Michael S. Tsirkin, so
his word is now law.
That said... there's nothing fundamentally *wrong* with this, but it's
not how standard virtio works. We decided some time ago that as we're
paravirtualized, we would not be doing address mapping.
rpmsg uses virtio, but it's with a twist: they're not talking to a
host. Thus my preference, in order, would be:
1) Don't use non-kmalloc addresses.
2) If that's not possible, call these _dma interfaces _rpmsg instead,
so normal virtio users don't get confused and try to use them.
Thanks Rusty,

That was helpful, I'll see if I can do something in line with nr 2.

AFAICT, #1 will be hard. The remote processor would have to be
cache-coherent and share the master CPU's view of the memory address
space. This is not the common case for remoteproc (unlike when virtio
communication flows between host and guest on the same CPU or SMP system).
Ohad, do you have any thoughts on this?

Cheers,
Edgar
Post by Rusty Russell
Cheers,
Rusty.
Post by Edgar E. Iglesias
---
drivers/rpmsg/virtio_rpmsg_bus.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 73354ee..9ae53a0 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -210,6 +210,22 @@ static void __ept_release(struct kref *kref)
kfree(ept);
}
+static inline dma_addr_t msg_dma_address(struct virtproc_info *vrp, void *msg)
+{
+ unsigned long offset = msg - vrp->rbufs;
+
+ return vrp->bufs_dma + offset;
+}
+
+static inline void rpmsg_msg_sg_init(struct virtproc_info *vrp,
+ struct scatterlist *sg,
+ void *msg, unsigned int len)
+{
+ sg_init_table(sg, 1);
+ sg_dma_address(sg) = msg_dma_address(vrp, msg);
+ sg_dma_len(sg) = len;
+}
+
/* for more info, see below documentation of rpmsg_create_ept() */
static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,
@@ -754,12 +770,12 @@ int rpmsg_send_offchannel_raw(struct rpmsg_channel *rpdev, u32 src, u32 dst,
print_hex_dump(KERN_DEBUG, "rpmsg_virtio TX: ", DUMP_PREFIX_NONE, 16, 1,
msg, sizeof(*msg) + msg->len, true);
- sg_init_one(&sg, msg, sizeof(*msg) + len);
+ rpmsg_msg_sg_init(vrp, &sg, msg, sizeof(*msg) + len);
mutex_lock(&vrp->tx_lock);
/* add message to the remote processor's virtqueue */
- err = virtqueue_add_outbuf(vrp->svq, &sg, 1, msg, GFP_KERNEL);
+ err = dma_virtqueue_add_outbuf(vrp->svq, &sg, 1, msg, GFP_KERNEL);
if (err) {
/*
* need to reclaim the buffer here, otherwise it's lost
@@ -828,10 +844,10 @@ static int rpmsg_recv_single(struct virtproc_info *vrp, struct device *dev,
dev_warn(dev, "msg received with no recipient\n");
/* publish the real size of the buffer */
- sg_init_one(&sg, msg, RPMSG_BUF_SIZE);
+ rpmsg_msg_sg_init(vrp, &sg, msg, RPMSG_BUF_SIZE);
/* add the buffer back to the remote processor's virtqueue */
- err = virtqueue_add_inbuf(vrp->rvq, &sg, 1, msg, GFP_KERNEL);
+ err = dma_virtqueue_add_inbuf(vrp->rvq, &sg, 1, msg, GFP_KERNEL);
if (err < 0) {
dev_err(dev, "failed to add a virtqueue buffer: %d\n", err);
return err;
@@ -1007,9 +1023,9 @@ static int rpmsg_probe(struct virtio_device *vdev)
struct scatterlist sg;
void *cpu_addr = vrp->rbufs + i * RPMSG_BUF_SIZE;
- sg_init_one(&sg, cpu_addr, RPMSG_BUF_SIZE);
+ rpmsg_msg_sg_init(vrp, &sg, cpu_addr, RPMSG_BUF_SIZE);
- err = virtqueue_add_inbuf(vrp->rvq, &sg, 1, cpu_addr,
+ err = dma_virtqueue_add_inbuf(vrp->rvq, &sg, 1, cpu_addr,
GFP_KERNEL);
WARN_ON(err); /* sanity check; this can't really happen */
}
--
1.9.1
Ohad Ben-Cohen
2015-05-16 09:32:10 UTC
On Thu, May 7, 2015 at 3:28 AM, Edgar E. Iglesias
Post by Edgar E. Iglesias
Post by Rusty Russell
First off, I have handed maintainership off to Michael S. Tsirkin, so
his word is now law.
That said... there's nothing fundamentally *wrong* with this, but it's
not how standard virtio works. We decided some time ago that as we're
paravirtualized, we would not be doing address mapping.
rpmsg uses virtio, but it's with a twist: they're not talking to a
host. Thus my preference, in order, would be:
1) Don't use non-kmalloc addresses.
2) If that's not possible, call these _dma interfaces _rpmsg instead,
so normal virtio users don't get confused and try to use them.
Thanks Rusty,
That was helpful, I'll see if I can do something in line with nr 2.
AFAICT, #1 will be hard. The remote-processor would have to be
cache-coherent and share memory address-space view with the master
CPU. This is not the common case for remoteproc (unlike when virtio
communication flows between host and guest on the same CPU or SMP system).
Ohad, do you have any thoughts on this?
rpmsg is allocating a large chunk (256KB) of physically-contiguous CMA
memory today, which is exposed via the dma_alloc_coherent API (and set
up in advance by platform-specific code), so if #2 above is
acceptable, it would be easier, yeah.

Thanks,
Ohad.
Michael S. Tsirkin
2015-06-23 05:17:44 UTC
Post by Ohad Ben-Cohen
On Thu, May 7, 2015 at 3:28 AM, Edgar E. Iglesias
Post by Edgar E. Iglesias
Post by Rusty Russell
First off, I have handed maintainership off to Michael S. Tsirkin, so
his word is now law.
That said... there's nothing fundamentally *wrong* with this, but it's
not how standard virtio works. We decided some time ago that as we're
paravirtualized, we would not be doing address mapping.
rpmsg uses virtio, but it's with a twist: they're not talking to a
host. Thus my preference, in order, would be:
1) Don't use non-kmalloc addresses.
2) If that's not possible, call these _dma interfaces _rpmsg instead,
so normal virtio users don't get confused and try to use them.
Thanks Rusty,
That was helpful, I'll see if I can do something in line with nr 2.
AFAICT, #1 will be hard. The remote-processor would have to be
cache-coherent and share memory address-space view with the master
CPU. This is not the common case for remoteproc (unlike when virtio
communication flows between host and guest on the same CPU or SMP system).
Ohad, do you have any thoughts on this?
rpmsg is allocating a large chunk (256KB) of physically-contiguous CMA
memory today, which is exposed via the dma_alloc_coherent API (and set
up in advance by platform-specific code), so if #2 above is
acceptable, it would be easier, yeah.
Thanks,
Ohad.
I'm thinking same as Rusty: I'd prefer 1 but if not possible, I can live
with 2 above.
--
MST
Edgar E. Iglesias
2015-06-23 11:46:13 UTC
Post by Michael S. Tsirkin
Post by Ohad Ben-Cohen
On Thu, May 7, 2015 at 3:28 AM, Edgar E. Iglesias
Post by Edgar E. Iglesias
Post by Rusty Russell
First off, I have handed maintainership off to Michael S. Tsirkin, so
his word is now law.
That said... there's nothing fundamentally *wrong* with this, but it's
not how standard virtio works. We decided some time ago that as we're
paravirtualized, we would not be doing address mapping.
rpmsg uses virtio, but it's with a twist: they're not talking to a
host. Thus my preference, in order, would be:
1) Don't use non-kmalloc addresses.
2) If that's not possible, call these _dma interfaces _rpmsg instead,
so normal virtio users don't get confused and try to use them.
Thanks Rusty,
That was helpful, I'll see if I can do something in line with nr 2.
AFAICT, #1 will be hard. The remote-processor would have to be
cache-coherent and share memory address-space view with the master
CPU. This is not the common case for remoteproc (unlike when virtio
communication flows between host and guest on the same CPU or SMP system).
Ohad, do you have any thoughts on this?
rpmsg is allocating a large chunk (256KB) of physically-contiguous CMA
memory today, which is exposed via the dma_alloc_coherent API (and set
up in advance by platform-specific code), so if #2 above is
acceptable, it would be easier, yeah.
Thanks,
Ohad.
I'm thinking same as Rusty: I'd prefer 1 but if not possible, I can live
with 2 above.
Thanks all for the feedback,

I'm currently travelling and won't be doing much coding until
August/September. We are using something along the lines of nr 2 at
Xilinx and it's working fine. I'll post something like that in
September if no one beats me to it.

Best regards,
Edgar
