System Architecture
===================

Thanks for reading what has been completed of the tutorial so far. It probably won’t ever be fully done. Anyway, I’d like to quickly discuss the system architecture.

First, the OpenMV Cam is based on the STM32 microcontroller architecture because the MicroPython pyboard is based on the STM32 microcontroller architecture. Had the project been started on some other system, things might have been totally different.

Moving on, we chose not to use DRAM with the original OpenMV Cam because it made the system too expensive to produce at low volumes. SDR DRAM (which is what the STM32 supports) isn’t cheap at low manufacturing volumes and greatly increases board design complexity (e.g. you need an 8-layer board to route all the signals). As we’ve revved the OpenMV Cam with faster and faster main processors, SDR DRAM speed has also not kept up with the internal RAM speed. On the STM32H7, for example, the internal RAM bandwidth is 3.2 GB/s versus a maximum SDR DRAM bandwidth of 666 MB/s, even if we built the system with an 8-layer board using a 32-bit DRAM bus requiring 50+ I/O pins for the DRAM.

So, since we’re built on the STM32 architecture and limited to expensive, slow SDR DRAM for now, we haven’t added it, as our internal SRAM is way faster. As production volumes go up and technology improves, hopefully we’ll be able to have more memory while still keeping the OpenMV Cam simple to use.

Memory Architecture
-------------------

Given the above memory architecture limitations, we built all of our code to run inside of the STM32 microcontroller’s memory. However, the STM32 doesn’t have one large contiguous memory map. It features different segments of RAM for different situations.

First, there’s a segment of RAM which contains global variables, the heap, and the stack. The heap and global variables are fixed in size, so only the stack grows and shrinks. For performance reasons, heap/stack collisions are not checked constantly, so don’t use recursive functions on the OpenMV Cam.
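Since an unchecked heap/stack collision silently corrupts memory, deep call chains are the thing to avoid. A minimal sketch (plain Python, not firmware code) of rewriting a recursive function as a loop so stack usage stays constant:

```python
def factorial_recursive(n):
    # Each call adds a stack frame; on the OpenMV Cam a deep chain of
    # these can grow the stack into the heap without warning.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Same result with one stack frame, no matter how large n gets.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_iterative(10))  # 3628800
```

The iterative version trades a few lines of bookkeeping for a hard upper bound on stack depth, which is exactly what you want when collisions aren’t checked.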

As for the heap, it’s fixed in size rather than growing towards the stack, and it’s managed by MicroPython’s garbage collector, which automagically frees up unused blocks inside of the heap. However, the design of the MicroPython heap does not allow it to be arbitrarily large (e.g. in the megabyte range) like heaps on PCs. So, even if we had DRAM, it would be hard to leverage it using MicroPython’s heap.
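You can also drive the collector by hand when working close to the heap limit. A small sketch using the `gc` module, which exists on both CPython and MicroPython (the memory-introspection calls in the trailing comment are MicroPython-only):

```python
import gc

# Create garbage: each iteration drops the previous list, leaving
# unreachable blocks for the collector to reclaim.
for _ in range(1000):
    data = [0] * 100
del data

# Force a collection pass now, instead of waiting for an allocation
# to fail. Handy right before a memory-hungry vision call.
gc.collect()

# MicroPython-only introspection (not available on desktop Python):
# print(gc.mem_free(), gc.mem_alloc())
```

Calling gc.collect() at a predictable point in your main loop also keeps collection pauses from landing in the middle of time-sensitive code.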

Next, there’s a larger memory segment for the frame buffer to store images in. New images are stored at the bottom of the frame buffer when functions like sensor.snapshot() are called. Any unused space in the frame buffer is then available to be used as a “frame buffer stack” that builds from the top of the frame buffer down. This memory architecture design is what allows a lot of our computer vision methods to execute without having to allocate large data structures inside of the MicroPython heap.
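As a rough illustration (a toy model, not the actual firmware, and the sizes are made up), the segment behaves like two regions growing toward each other: images fill from the bottom, scratch allocations stack down from the top, and the two must never meet:

```python
class FrameBufferSegment:
    """Toy model of the frame buffer segment: images are placed at the
    bottom, the 'frame buffer stack' grows down from the top."""

    def __init__(self, size):
        self.size = size
        self.image_top = 0         # end of image storage (grows upward)
        self.stack_bottom = size   # start of stack storage (grows downward)

    def store_image(self, nbytes):
        # Roughly what storing a new frame does.
        if self.image_top + nbytes > self.stack_bottom:
            raise MemoryError("image collides with frame buffer stack")
        self.image_top += nbytes

    def stack_alloc(self, nbytes):
        # Scratch space for a vision algorithm -- strictly LIFO.
        if self.stack_bottom - nbytes < self.image_top:
            raise MemoryError("frame buffer stack exhausted")
        self.stack_bottom -= nbytes
        return self.stack_bottom

fb = FrameBufferSegment(400 * 1024)   # hypothetical 400 KB segment
fb.store_image(320 * 240 * 2)         # one QVGA RGB565 frame: 150 KB
addr = fb.stack_alloc(64 * 1024)      # scratch for an algorithm
print(fb.stack_bottom - fb.image_top) # bytes still free in the middle
```

Because scratch space comes from the opposite end, a vision method can grab and release large working buffers without ever touching the MicroPython heap.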

That said, the frame buffer stack is still a stack and doesn’t support random allocations and deallocations. Luckily, most computer vision algorithms have very predictable memory allocations. For the ones that don’t (like AprilTags), we allocate a temporary heap inside of the frame buffer stack when we need it (again, to avoid fragmenting the MicroPython heap).
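To see why a temporary heap helps, here’s a toy first-fit allocator (hypothetical, not the firmware’s allocator, and it skips coalescing for brevity) living inside a region that was carved off a stack. Unlike the stack itself, it tolerates out-of-order frees:

```python
class TempHeap:
    """Toy first-fit heap over a fixed region, e.g. one reserved on
    the frame buffer stack for an algorithm like AprilTags."""

    def __init__(self, size):
        self.free_list = [(0, size)]  # (offset, length) runs of free space

    def alloc(self, nbytes):
        for i, (off, length) in enumerate(self.free_list):
            if length >= nbytes:
                if length == nbytes:
                    self.free_list.pop(i)   # run consumed entirely
                else:
                    self.free_list[i] = (off + nbytes, length - nbytes)
                return off
        raise MemoryError("temporary heap exhausted")

    def free(self, off, nbytes):
        # Simplified: return the run without merging neighbors.
        self.free_list.append((off, nbytes))

heap = TempHeap(16 * 1024)   # region reserved on the frame buffer stack
a = heap.alloc(4096)
b = heap.alloc(1024)
heap.free(a, 4096)           # out-of-order free: fine inside a heap,
c = heap.alloc(2048)         # impossible with pure stack allocation
```

When the algorithm finishes, the whole region is popped off the frame buffer stack in one go, so none of this bookkeeping ever fragments the MicroPython heap.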

Finally, vision algorithms return their results (which are usually small) by allocating objects in the MicroPython heap. The results can then be garbage collected easily by MicroPython, while the frame buffer stack is fully cleared after any computer vision algorithm finishes executing.

Now, while this works great, it means you can only have one big image in the frame buffer in RAM. As the MicroPython heap is optimized for small objects, storing large 100 KB images in it doesn’t make sense. To fit more images in RAM, we allow the frame buffer stack to be used for secondary image storage via sensor.alloc_extra_fb(). By allocating a secondary frame buffer on the frame buffer stack, you can have two or more images in RAM at the cost of reducing the memory space available for more complex algorithms (like AprilTags).
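A typical use is frame differencing, where you keep a reference frame in an extra buffer and compare new snapshots against it. A hedged sketch of how that might look (the sensor module only exists on the OpenMV Cam, so the import is guarded for desktop runs, and the exact call pattern is an illustration rather than a canonical script):

```python
# Sketch: keep a reference frame in an extra frame buffer carved
# out of the frame buffer stack, then difference new frames against it.
try:
    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)

    # Allocate a second image on the frame buffer stack.
    extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(),
                                     sensor.RGB565)
    extra_fb.replace(sensor.snapshot())   # store the reference frame

    img = sensor.snapshot()
    img.difference(extra_fb)              # highlight what changed
    ON_DEVICE = True
except ImportError:
    ON_DEVICE = False  # running off-device; sensor is unavailable
```

The extra buffer stays allocated until you release it, so every byte it holds is a byte the frame buffer stack can no longer lend to algorithms like AprilTags.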

So, that’s the memory architecture. And… we allow images to be stored in the frame buffer, the heap, and the frame buffer stack. Yes, our code is rather complex to handle all of this, and it would have been great to just throw everything in a large DRAM. But, now you know why this isn’t the case.