Great post, thanks! Do we know how much performance would be lost by "turning off" the LVP and LAP? I would naively guess that the LVP and LAP are not very helpful in deep neural net training or inference, then -- is this wrong?
I think it really depends on the workload. Ultimately, though, the LAP and the LVP are CPU performance-optimization features that aren't present on GPUs (or dedicated AI chips), due to GPUs' relative lack of complex branching logic -- so disabling them wouldn't affect neural network performance when you're running the network on a GPU.
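To make the "depends on the workload" point concrete, here's a rough illustrative sketch in plain C (the functions and constants are made up for illustration, not taken from the post): a pointer-chasing loop whose next load address depends on the previously loaded value -- the kind of serialized, hard-to-prefetch pattern a load address/value predictor could plausibly speed up -- contrasted with a dense dot product of the sort that dominates neural-net kernels, where addresses are simple functions of the loop index and ordinary prefetching already covers them.

```c
#include <stdio.h>

#define N 1024

/* Pattern 1: pointer chasing. Each load's address depends on the value
 * returned by the previous load, so the pipeline stalls on every miss
 * unless something like an LAP/LVP can guess the next address or value. */
typedef struct node { struct node *next; int payload; } node;

static long chase(node *head, int steps) {
    long sum = 0;
    for (int i = 0; i < steps; i++) {
        sum += head->payload;   /* need this load's result to find the next address */
        head = head->next;
    }
    return sum;
}

/* Pattern 2: dense, regular access, typical of neural-net inner loops.
 * Addresses are affine in the loop index, so conventional prefetching
 * handles them; LAP/LVP would add little here. */
static float dot(const float *a, const float *b, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

int main(void) {
    /* Build a small scrambled circular list for the chase. */
    static node nodes[N];
    for (int i = 0; i < N; i++) {
        nodes[i].next = &nodes[(i * 7 + 1) % N];
        nodes[i].payload = i;
    }

    static float a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f / (float)(i + 1); }

    printf("chase: %ld\n", chase(&nodes[0], 100000));
    printf("dot:   %f\n", dot(a, b, N));
    return 0;
}
```

The first loop is the kind of CPU-bound, dependent-load code where these predictors earn their keep; the second is the kind of regular, throughput-bound work that GPUs and AI accelerators are built for, which is why they get by without such predictors.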