I am not following your discussion of missed samples. In stream mode, all data is collected at the specified interval using a clock in hardware. You will get all that data, or you will get an error. You will not have any skipped or missed scans.
Are you asking how long it takes AIStreamRead to execute in your loop? That is, when you call AIStreamRead, how long does it take to return with your data? Much of that time is spent waiting for the data to be collected, but assuming the data is already collected and waiting in the U12's buffer, a rule of thumb for the time in milliseconds is 20 + 0.4*NumSamples. So if you are reading 1200 samples, you can estimate the read will take about 500 ms to execute and move that many samples from the U12 to software.
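To make the arithmetic concrete, here is that rule of thumb as a small Python sketch (the formula is the 20 + 0.4*NumSamples estimate above; it is only an estimate, not a guaranteed driver timing):

```python
def estimated_read_ms(num_samples):
    """Rule-of-thumb AIStreamRead duration in ms, assuming the samples
    are already collected and waiting in the U12's buffer."""
    return 20 + 0.4 * num_samples

# 1200 samples -> about 500 ms to transfer from the U12 to software.
print(estimated_read_ms(1200))  # 500.0
```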
Note that "simple ai stream example.vi" has a delay frame in the read loop so you can simulate processing time. In the example, set ScanRate=300 scans/second, NumberOfScans=300, and NumberOfChannels=4. That means your sample rate is 300*4 = 1200 samples/second. Run the example and you should see that it runs continuously and that each loop iteration takes about 1000 ms. You should also see that ScanBacklog is not growing, which tells you that you are keeping up and the U12's buffer is not filling.
Now start increasing "milliseconds to delay before reading". On my machine, if I set it to 700 ms, ScanBacklog starts to grow. We are scanning at 300 scans/second and reading 300 scans/read, so we know each loop iteration must take about 1000 ms to keep up. With 700 ms of delay in the loop, only 300 ms is left for the read function, but the read takes longer than 300 ms, so we are not looping fast enough and ScanBacklog starts to grow. If I set the delay back to 600 ms, the loop is able to catch up and ScanBacklog drops to 0.
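The budget logic behind that behavior can be sketched in Python. The backlog grows whenever delay + read time exceeds the loop period implied by the scan rate. The `read_ms=350` value below is a hypothetical measured read time for illustration (the rule-of-thumb estimate is conservative; on my machine the actual read was evidently somewhere between 300 and 400 ms):

```python
def loop_period_ms(scan_rate, scans_per_read):
    """Time in ms for the U12 to collect one read's worth of scans.
    Each loop iteration must complete within this period to keep up."""
    return scans_per_read / scan_rate * 1000.0

def backlog_grows(scan_rate, scans_per_read, delay_ms, read_ms):
    """True when the loop falls behind: delay plus read time exceed
    the period at which the hardware produces each batch of scans."""
    return delay_ms + read_ms > loop_period_ms(scan_rate, scans_per_read)

# 300 scans/s, 300 scans/read -> 1000 ms per iteration budget.
# With a hypothetical 350 ms read: 700 ms delay falls behind, 600 ms keeps up.
print(backlog_grows(300, 300, 700, 350))  # True
print(backlog_grows(300, 300, 600, 350))  # False
```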