Channel: SystemC Language Forum RSS Feed

Modeling packets age using sc_time_stamp()


The time at which a packet is generated is stored in the header of the packet. The header allocates only 32 bits for this timestamp.

For any variable sc_time T_start, T_start.to_default_time_unit() returns the time expressed in default time units, i.e. the number of clock cycles if the default time unit has been set to (CLK_PERIOD, SC_NS) via sc_set_default_time_unit().

 

The question here is how to convert sc_time values to a 32-bit unsigned integer without a large loss of precision.

 

At the producer side, this piece of code runs each time a packet is generated:

#include <cmath> // for std::round

sc_time t_start = sc_time_stamp();
double magnitude = t_start.to_default_time_unit(); // number of clock cycles since the start of the simulation, provided the default time unit equals the clock period (is this correct?)
unsigned int t_start_u_32 = (unsigned int) std::round(magnitude); // possible loss of precision if magnitude is too large to be represented in 32 bits
// store t_start_u_32 in the corresponding packet header field
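One way to avoid the double/round step entirely is to stay in integer arithmetic: sc_time::value() returns the time as a uint64 count of time-resolution units, and dividing by the clock period expressed in the same units gives an exact cycle count whenever events land on clock edges. A minimal sketch in plain C++ (SystemC types replaced by uint64; CLK_PERIOD_UNITS and cycles32 are hypothetical names, assuming a 10 ns clock and a 1 ns time resolution):

```cpp
#include <cstdint>

// Assumed: clock period expressed in time-resolution units
// (10 ns clock / 1 ns resolution = 10 units per cycle).
constexpr std::uint64_t CLK_PERIOD_UNITS = 10;

// Convert an absolute time in resolution units (what sc_time::value()
// would return) to a 32-bit cycle count using integer arithmetic only.
std::uint32_t cycles32(std::uint64_t time_units) {
    // Integer division is exact when the timestamp falls on a clock edge;
    // the cast keeps only the low 32 bits (wraps modulo 2^32).
    return static_cast<std::uint32_t>(time_units / CLK_PERIOD_UNITS);
}
```

The truncation to 32 bits is then an explicit modulo-2^32 wrap rather than a silent precision loss in a double-to-int conversion.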

At the consumer side (an arbiter that selects packets depending on their age):

double t_start_ret = t_start_u_32 * CLK_PERIOD; // cycle count back to nanoseconds
sc_time t_start(t_start_ret, SC_NS);
sc_time packet_age = sc_time_stamp() - t_start;
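If both the stored stamp and the current time are kept as 32-bit cycle counts, the age can also be computed directly in that domain: unsigned subtraction wraps modulo 2^32, so the result stays correct even after the counter wraps, as long as the true age is below 2^32 cycles. A sketch under that assumption (plain C++ standing in for the SystemC types; age_cycles is a hypothetical helper):

```cpp
#include <cstdint>

// Age in clock cycles from two 32-bit cycle stamps. Unsigned subtraction
// is defined modulo 2^32, so now - start is correct even if the counter
// wrapped between the two stamps (provided the real age < 2^32 cycles).
std::uint32_t age_cycles(std::uint32_t now, std::uint32_t start) {
    return now - start;
}
```

With a 1 ns clock, 2^32 cycles is roughly 4.3 seconds of simulated time, which bounds the maximum packet age this scheme can represent.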

Is there any good way to manipulate sc_time variables, the default time unit, and round/cast operations without losing precision under the 32-bit constraint?

