I just had a closer look at the new interval system of the 18.2 release. I cannot see the reason why CH_CFG_TIME_TYPES_SIZE is limited to 16 and 32 bits, though. Of course, there is the risk of overflows in the conversion functions, which is why 'time_conv_t' always has double the width of the other time types, but this is implemented in a very pessimistic way. I would like to see the possibility to set CH_CFG_TIME_TYPES_SIZE to 64, as there is plenty of headroom (depending on other settings) before overflows may occur.
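For context, the type widths are tied together roughly like this (a simplified sketch, not the exact chtime.h source; the 64-bit branch is the hypothetical addition, where 'time_conv_t' can no longer be widened because there is no standard 128-bit type):
Code:
#include <stdint.h>

#if CH_CFG_TIME_TYPES_SIZE == 16
typedef uint16_t systime_t;
typedef uint16_t sysinterval_t;
typedef uint32_t time_conv_t;   /* double width absorbs intermediate overflows */
#elif CH_CFG_TIME_TYPES_SIZE == 32
typedef uint32_t systime_t;
typedef uint32_t sysinterval_t;
typedef uint64_t time_conv_t;
#elif CH_CFG_TIME_TYPES_SIZE == 64
typedef uint64_t systime_t;
typedef uint64_t sysinterval_t;
typedef uint64_t time_conv_t;   /* same width: conversions must be range-checked */
#endif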
The worst case appears to be in the US2I conversion functions:
Code:
#define TIME_US2I(usecs) \
((sysinterval_t)((((time_conv_t)(usecs) * \
(time_conv_t)CH_CFG_ST_FREQUENCY) + \
(time_conv_t)999999) / (time_conv_t)1000000))
The critical part here is the expression
Code:
(((time_conv_t)(usecs) * (time_conv_t)CH_CFG_ST_FREQUENCY) + (time_conv_t)999999)
as the result of this calculation must fit into 'time_conv_t'.
Consequently, the maximum value of 'usecs' can be calculated as
Code:
usecs <= ((time_conv_t)-1 - 999999) / (time_conv_t)CH_CFG_ST_FREQUENCY
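If one wanted a name for this bound, it could be captured in a helper macro (my naming, not something from the kernel):
Code:
/* Hypothetical helper, not part of the kernel: the largest 'usecs'
   value that TIME_US2I() can convert without overflowing
   'time_conv_t' (assumed unsigned, as in the kernel). */
#define TIME_US2I_MAX_US                                                    \
  (((time_conv_t)-1 - (time_conv_t)999999) / (time_conv_t)CH_CFG_ST_FREQUENCY)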
Now let's assume a 64 bit wide 'time_conv_t' and a very high setting of CH_CFG_ST_FREQUENCY:
Code:
chconf.h
// 1us resolution
#define CH_CFG_ST_FREQUENCY 1000000
This results in
Code:
usecs <= ((time_conv_t)-1 - 999999) / (time_conv_t)CH_CFG_ST_FREQUENCY
= (2^64-1 - 999999) / 1000000
= 1.84E13
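The figure is easy to double-check on a host machine (a throwaway snippet, not kernel code):
Code:
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* (2^64 - 1 - 999999) / 1000000 */
    uint64_t max_us = (UINT64_MAX - UINT64_C(999999)) / UINT64_C(1000000);
    printf("max usecs = %" PRIu64 "\n", max_us);   /* 18446744073708, ~1.84E13 */
    return 0;
}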
As you can see, the result is much larger than 2^32 (4.29E9), and it would be even larger for smaller values of CH_CFG_ST_FREQUENCY.
I would propose a solution where CH_CFG_TIME_TYPES_SIZE can be set to 64, but the secure time conversion functions check the argument against this maximum value, e.g.
Code:
chDbgAssert(usecs <= ((time_conv_t)-1 - (time_conv_t)999999) / (time_conv_t)CH_CFG_ST_FREQUENCY, "TIME_US2I() overflow");
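Wrapped into the secure conversion function, it could look roughly like this (just a sketch; chTimeUS2I() and time_usecs_t already exist in chtime.h if I remember correctly, the assertion is the new part):
Code:
/* Sketch only: secure microseconds-to-interval conversion with the
   proposed overflow check added in front of the existing macro. */
static inline sysinterval_t chTimeUS2I(time_usecs_t usecs) {

  chDbgAssert((time_conv_t)usecs <=
                ((time_conv_t)-1 - (time_conv_t)999999) /
                (time_conv_t)CH_CFG_ST_FREQUENCY,
              "TIME_US2I() overflow");

  return TIME_US2I(usecs);
}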
Best regards,
Thomas