From: KJ on
For some reason, when reading the original post, I took it that what
was needed was independent control of both address and data on the
multiple ports, implying a certain number of memory bits accessible
from port A and a different number from port B.

From the two responses it would appear that all we're talking about is
two independent data bus sizes. Address size of the various ports is a
calculated value determined from the data bus size and memory size.
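
To put numbers on it (picked just for illustration): a 4096-bit memory
that is 8 bits wide on port A is 512 words deep and needs a 9-bit
address, while the same memory seen as 32 bits wide on port B is 128
words deep and needs only a 7-bit address. In general,
address_width = log2(memory_bits / data_width).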

Ben Jones wrote:
> I'm not sure why none of the synthesis tools support this, but it's true,
> they don't. I've always ended up instantiating something to get this
> behaviour. :-(
>
Not sure what support you think you're not getting. Memory can be
inferred from plain vanilla VHDL with synthesis tools. Data bus sizing
(and the implied address bus sizing) is a wrapper around that basic
memory structure and gets synthesized just fine...so it is supported.
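
By 'plain vanilla' I mean something along the lines of the untested
sketch below (widths picked arbitrarily). Every synthesis tool I've
used will happily pull this into a block RAM:

  -- Plain inferred RAM, single port shown for brevity (untested).
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity ram_sp is
    port (
      clk  : in  std_logic;
      we   : in  std_logic;
      addr : in  unsigned(8 downto 0);           -- 512 words
      din  : in  std_logic_vector(7 downto 0);
      dout : out std_logic_vector(7 downto 0));
  end entity ram_sp;

  architecture rtl of ram_sp is
    type ram_t is array (0 to 511) of std_logic_vector(7 downto 0);
    signal ram : ram_t;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if we = '1' then
          ram(to_integer(addr)) <= din;
        end if;
        dout <= ram(to_integer(addr));  -- synchronous read maps to block RAM
      end if;
    end process;
  end architecture rtl;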

If what you mean by 'not supported' is that there isn't a pre-defined
instance that you can plop down into your code and parameterize, then
you're heading into Mr. Wizard territory, which leads to vendor-specific
implementations. Avoiding vendor-specific code is usually the better
approach. To have useful vendor-independent modules like this, the
modules need to be standardized. This is exactly the type of thing that
LPM attempted to do. LPM languishes as a standard, though, because it
was never updated to include new and useful modules. Presumably this is
because the FPGA vendors would rather go the Mr. Wizard path and try to
lock designers into their parts for irrational reasons, rather than
enhance standards like LPM so that designers can remain vendor-neutral
at design time and let parts selection be based on rational criteria
like cost, function, and performance.

KJ

From: Ben Jones on

"KJ" <Kevin.Jennings(a)Unisys.com> wrote in message
news:1162303126.746950.165570(a)i42g2000cwa.googlegroups.com...
> From the two responses it would appear that all we're talking about is
> two independent data bus sizes. Address size of the various ports is a
> calculated value determined from the data bus size and memory size.

Yup, that's what I would assume (since nothing else makes sense :-))

>> I'm not sure why none of the synthesis tools support this, but it's true,
>> they don't. I've always ended up instantiating something to get this
>> behaviour. :-(

> Not sure what support you think you're not getting. Memory can be
> inferred from plain vanilla VHDL with synthesis tools. Data bus sizing
> (and the implied address bus sizing) is a wrapper around that basic
> memory structure and gets synthesized just fine...so it is supported.

The *functionality* is supported, but the optimal mapping to the
technology is not. Or wasn't, last time I looked. When I write that
plain vanilla VHDL, I have never seen a synthesis tool create an
asymmetrically-ported RAM from it; I always get a RAM with a
multiplexer on the output (or worse, a bunch of DFFs).
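
To be concrete, I mean a description like the following untested
sketch: byte-wide writes and word-wide reads over the same array. It
simulates fine, but the read side has always come out as LUT
multiplexers for me rather than as a native 32-bit block RAM port:

  -- Asymmetric RAM description that tools accept but (in my
  -- experience) do not map to a native asymmetric block RAM.
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity ram_asym is
    port (
      clk    : in  std_logic;
      we_a   : in  std_logic;
      addr_a : in  unsigned(8 downto 0);          -- 512 x 8 write side
      din_a  : in  std_logic_vector(7 downto 0);
      addr_b : in  unsigned(6 downto 0);          -- 128 x 32 read side
      dout_b : out std_logic_vector(31 downto 0));
  end entity ram_asym;

  architecture rtl of ram_asym is
    type ram_t is array (0 to 511) of std_logic_vector(7 downto 0);
    signal ram : ram_t;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if we_a = '1' then
          ram(to_integer(addr_a)) <= din_a;
        end if;
        -- The 32-bit word is assembled from four consecutive bytes;
        -- this concatenation is what becomes fabric multiplexers.
        dout_b <= ram(to_integer(addr_b & "11")) &
                  ram(to_integer(addr_b & "10")) &
                  ram(to_integer(addr_b & "01")) &
                  ram(to_integer(addr_b & "00"));
      end if;
    end process;
  end architecture rtl;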

> If what you mean by 'not supported' is that there isn't a pre-defined
> instance that you can plop down into your code and parameterize, then
> you're heading into Mr. Wizard territory, which leads to
> vendor-specific implementations. Avoiding vendor-specific code is
> usually the better approach.

In many cases I would agree, so long as you don't end up crippling
your design's performance as a result, or spending money on silicon
features that you're not going to use. After all, those features were
put there to help you make your design as efficient as possible (which
managers usually like).

Certainly making sure that vendor-specific functions are isolated in the
code so they can be swapped out at will is a sensible practice. As is making
a careful risk assessment whenever you consider using a feature that only
one vendor or device family supports.

> Presumably this is because the FPGA vendors would
> rather go the Mr. Wizard path and try to lock designers into their
> parts for irrational reasons

With all due respect, I think you presume too much. There are many problems
with wizards and core generators for things like RAMs and arithmetic
elements - mostly, they are the wrong level of abstraction for most designs.
Nevertheless, IP cores from FPGA vendors serve two major purposes. Firstly,
they help designers get the most out of the silicon in those cases where
synthesis tools are not sophisticated enough to produce optimal results.
Secondly, they allow designers to buy large blocks of standards-compliant
IP - such as error correction cores, DDR controllers, and what have you -
instead of designing them in-house.

I'm not denying that there is a risk of vendor lock-in, but I'd dispute that
it's the motivating factor for vendors to develop IP. Certainly when members
of the IP development team that I belong to here at Xilinx sit down with the
marketing department and discuss roadmaps, the questions that come up are
always "What are customers asking for? What is good/bad about our current
product? What new features do we need?", not "How can we ensnare more
hapless design engineers today?". :-)

Cheers,

-Ben-


From: KJ on

Ben Jones wrote:

> > Presumably this is because the FPGA vendors would
> > rather go the Mr. Wizard path and try to lock designers into their
> > parts for irrational reasons
>
> With all due respect, I think you presume too much.
Perhaps I do; since I don't work for the FPGA vendors, I can only
speculate or presume.

> There are many problems
> with wizards and core generators for things like RAMs and arithmetic
> elements - mostly, they are the wrong level of abstraction for most designs.
Maybe. I find the lack of a standard on the 'internal' side to be
the bigger issue.

> Nevertheless, IP cores from FPGA vendors serve two major purposes. Firstly,
> they help designers get the most out of the silicon in those cases where
> synthesis tools are not sophisticated enough to produce optimal results.
I agree, they are good at that. But I don't believe that a unique
entity is required in order to produce the optimal silicon. Once the
synthesis tool hits a standardized entity name, it would know to stop
and pick up the targeted device's implementation.
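
Something along these lines, say (the entity name and generics below
are made up, purely to show the flavor of an LPM-style standard):

  -- Hypothetical standardized asymmetric dual-port RAM entity.
  -- The designer instantiates only this; each vendor's synthesis
  -- tool recognizes the standard name and substitutes its own
  -- optimized architecture for the targeted part.
  library ieee;
  use ieee.std_logic_1164.all;

  entity std_ram_dp_asym is
    generic (
      DATA_WIDTH_A : positive := 8;
      ADDR_WIDTH_A : positive := 9;   -- log2(mem bits / DATA_WIDTH_A)
      DATA_WIDTH_B : positive := 32;
      ADDR_WIDTH_B : positive := 7);  -- log2(mem bits / DATA_WIDTH_B)
    port (
      clk_a  : in  std_logic;
      we_a   : in  std_logic;
      addr_a : in  std_logic_vector(ADDR_WIDTH_A-1 downto 0);
      din_a  : in  std_logic_vector(DATA_WIDTH_A-1 downto 0);
      dout_a : out std_logic_vector(DATA_WIDTH_A-1 downto 0);
      clk_b  : in  std_logic;
      we_b   : in  std_logic;
      addr_b : in  std_logic_vector(ADDR_WIDTH_B-1 downto 0);
      din_b  : in  std_logic_vector(DATA_WIDTH_B-1 downto 0);
      dout_b : out std_logic_vector(DATA_WIDTH_B-1 downto 0));
  end entity std_ram_dp_asym;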

> Secondly, they allow designers to buy large blocks of standards-compliant
> IP - such as error correction cores, DDR controllers, and what have you -
> instead of designing them in-house.
And just exactly which standard interfaces are we talking about? DDRs
have a JEDEC standard, but the 'user' side of that DDR controller
doesn't have a standard interface. So while you take advantage of the
IC guys' standards for the physical interface, you don't apply any
muscle to standardizing an internal interface. The ASIC guys have
their standard, Wishbone is an open specification, Altera has theirs,
Xilinx has theirs...all the vendors have their own 'standard'.

What prevents everyone from standardizing on an interface to their
components, in a manner similar to what LPM attempts to do? The chip
guys do it for their parts; the FPGA vendors don't seem to want to do
anything similar on the IP core side. This doesn't prevent each
company from implementing the function in the best possible way; it
simply defines a standardized interface to basically identical
functionality (i.e. it turns read and write requests into DDR signal
twiddling, in the case of a DDR controller).
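
As a made-up sketch of what I mean, the only thing everyone would have
to agree on is the entity itself, say a Wishbone-flavored user side
bolted to the JEDEC pins (names and widths below are illustrative).
What goes inside the architecture stays vendor-tuned:

  -- Hypothetical standardized DDR controller interface.  The user
  -- side follows Wishbone signal conventions; the DDR side is the
  -- JEDEC-standard pins.  Each vendor supplies its own optimized
  -- architecture behind this one entity.
  library ieee;
  use ieee.std_logic_1164.all;

  entity std_ddr_ctrl is
    port (
      -- user side (Wishbone-flavored)
      clk_i : in  std_logic;
      rst_i : in  std_logic;
      cyc_i : in  std_logic;
      stb_i : in  std_logic;
      we_i  : in  std_logic;
      adr_i : in  std_logic_vector(25 downto 0);
      dat_i : in  std_logic_vector(31 downto 0);
      dat_o : out std_logic_vector(31 downto 0);
      ack_o : out std_logic;
      -- DDR memory side (JEDEC pins)
      ddr_ck    : out   std_logic;
      ddr_ck_n  : out   std_logic;
      ddr_cke   : out   std_logic;
      ddr_cs_n  : out   std_logic;
      ddr_ras_n : out   std_logic;
      ddr_cas_n : out   std_logic;
      ddr_we_n  : out   std_logic;
      ddr_ba    : out   std_logic_vector(1 downto 0);
      ddr_a     : out   std_logic_vector(12 downto 0);
      ddr_dm    : out   std_logic_vector(1 downto 0);
      ddr_dq    : inout std_logic_vector(15 downto 0);
      ddr_dqs   : inout std_logic_vector(1 downto 0));
  end entity std_ddr_ctrl;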

Can you list any 'standard' function IP where the code can be, and in
fact is, portable across FPGA vendors without touching the code?
Compression? Image processing? Color space converters? Memory
interfaces? Anything? All the vendors have things in each of those
categories, and each has its own unique interface to that thing.

>
> I'm not denying that there is a risk of vendor lock-in, but I'd dispute that
> it's the motivating factor for vendors to develop IP.
I was only suggesting that it was an incentive...which you seem to
agree with.

KJ

From: Peter Alfke on
The user community "pressures" the FPGA (and other IC) vendors to
come up with better and cheaper solutions. That's called progress. We
love it!

We respond with new and improved chip families. We get some help from
IC processing technology, i.e. "Moore's Law", especially in the form
of cost reduction and a little bit of speed improvement (less with
each generation). We also have to fight negative effects, notably
higher leakage currents.

Real progress comes from better integration of popular functions.
That's why we now include "hard-coded" FIFO and ECC controllers in the
BlockRAM, Ethernet and PCIe controllers, multi-gigabit transceivers,
and microprocessors. Clock control with DCMs and PLLs, as well as
configurable 75-ps incremental I/O delays are lower-level examples.
These features increase the value of our FPGAs, but they definitely are
not generic.

If a user wants to treat our FPGAs in a generic way, so that the design
can painlessly be migrated to our competitor, all these powerful,
cost-saving and performance-enhancing features (from either X or A)
must be avoided. That negates 80% of any progress from generation to
generation. Most users might not want to pay that price.

And remember, standards are nice and necessary for interfacing between
chips, but they always lag the "cutting edge" by several years. Have
you ever attended the bickering at a standards meeting?...

Cutting edge FPGAs will become ever less generic.
That's a fact of life, and it helps you build better and less costly
systems.
Peter Alfke
===========

From: KJ on

Peter Alfke wrote:
> Real progress comes from better integration of popular functions.
> That's why we now include "hard-coded" FIFO and ECC controllers in the
> BlockRAM, Ethernet and PCIe controllers, multi-gigabit transceivers,
> and microprocessors.
None of that is precluded; I'm just saying that I haven't heard why
it could not be accomplished within a standard framework. Why would
the entity (i.e. the interface) for brand X's FIFO with ECC, Ethernet,
blah, blah, blah, not use a standard user-side interface in addition
to the external standards? Besides facilitating moving a design
between vendors (which is not the only concern), it promotes ease of
use in the first place.

> Clock control with DCMs and PLLs, as well as
> configurable 75-ps incremental I/O delays are lower-level examples.
I agree, those are good examples of some of the easiest things that
could have a standardized interface...although I suspect you don't
agree with the way I'm reading what you wrote ;)

> These features increase the value of our FPGAs, but they definitely are
> not generic.
I said standardized, not 'generic'. I was discussing the interface to
that nifty whiz-bang item and saying that the interface could be
standardized; the implementation is free to take as much advantage of
the part as it wishes.

>
> If a user wants to treat our FPGAs in a generic way, so that the design
> can painlessly be migrated to our competitor, all these powerful,
> cost-saving and performance-enhancing features (from either X or A)
> must be avoided. That negates 80% of any progress from generation to
> generation. Most users might not want to pay that price.
My point was to agree on a standard interface for a given piece of
functionality, not some dumbed-down, generic, vanilla implementation
of that function.

To take an example, and using your numbers, are you suggesting that the
performance of a Xilinx DDR controller implemented using the Wishbone
interface would be 80% slower than the functionally identical DDR
controller that Xilinx has? If so, why is that? If not, then what
point were you trying to make?

>
> And remember, standards are nice and necessary for interfacing between
> chips, but they always lag the "cutting edge" by several years.
I don't think any of the FPGA vendors target only the 'cutting edge'
designs. I'm pretty sure that most of their revenue and profit comes
from designs that are not 'cutting edge', so that would give you those
'several years' to get the standardized IP in place.

> Have
> you ever attended the bickering at a standards meeting?...
>
Stop bickering so much, then. The IC guys cooperate and march to the
drumbeat of the IC roadmap whether or not they think it is possible at
the time (while recognizing the technology hurdles to getting there).
There is precedent for cooperation in this industry.

> Cutting edge FPGAs will become ever less generic.
Again, my point was standardization of the entity of the IP, not
whether it is 'generic'.

> That's a fact of life, and it helps you build better and less costly
> systems.
But that's not supported by anything you've said here. Again, my
point was: for a given function, why can't the interface to that
component be standardized? Provide an example to bolster your point
(as I've suggested with the earlier comments regarding the
Wishbone/Xilinx DDR controller example).

KJ
