cpu - How do x86 instructions that read/write data from memory interact with the L1 and L2 caches?


Let's say an x86 instruction reads data from an address in memory:

mov eax, word_123456 

Presumably this fetches the data from memory. Now let's store it back:

mov word_123456, eax 

I know from CPU architecture diagrams that there are caches sitting between random-access memory and the CPU. If I ask the CPU to store the contents of a register to memory, does the value go to the L1 cache first? What decides which cache it ends up in? Also, I'm curious whether I can write or hint x86 instructions to specify whether a move operation should be stored in the cache, or whether it is going to be a rare read/write, etc.

By default, the data goes into both the L1 and L2 caches. (I'm simplifying with respect to atomic accesses; if all you're doing is a plain mov, that's no big deal.) It's not so much that the value goes to the L1 cache "first"; rather, once you've read it into the register, the cache line is also cached for later accesses.

(I'm getting a little architecture-specific here. Some architectures choose to make the two caches exclusive, such that a cache line is removed from the L2 cache when it is put into the L1 cache. This doesn't have a huge effect on code performance, because the L2 cache is much larger than the L1 cache; it's more of a bookkeeping thing.)

The purpose of the L2 cache is to be bigger than the L1 cache, so that if something was in the L1 cache but has since been evicted, it is hopefully still in the L2 cache and doesn't require going all the way out to RAM.
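To make that layered picture concrete, here is a rough microbenchmark sketch (not part of the original answer): it chases pointers through working sets of increasing size, and the average access latency typically steps up each time the working set outgrows L1 and then L2. It assumes a POSIX system for clock_gettime, and the rnd_below helper is a crude stand-in for a proper RNG, so treat the numbers as illustrative rather than rigorous.

    /* Pointer-chasing sketch: latency per load rises as the working set
     * outgrows L1, then L2, then the last-level cache.
     * Build with something like: gcc -O2 chase.c -o chase
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    /* Crude helper: combine two rand() calls so this works even when RAND_MAX is small. */
    static size_t rnd_below(size_t n) {
        return (((size_t)rand() << 16) ^ (size_t)rand()) % n;
    }

    int main(void) {
        const size_t max_elems = (size_t)1 << 22;     /* 4M pointers = 32 MiB */
        size_t *next = malloc(max_elems * sizeof *next);
        if (!next) return 1;

        for (size_t elems = 1 << 10; elems <= max_elems; elems <<= 2) {
            /* Sattolo's algorithm: build a single random cycle, so idx visits
               every element and the prefetcher can't guess the next address. */
            for (size_t i = 0; i < elems; i++) next[i] = i;
            for (size_t i = elems - 1; i > 0; i--) {
                size_t j = rnd_below(i);
                size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
            }

            const size_t iters = (size_t)1 << 23;
            size_t idx = 0;
            double t0 = now_sec();
            for (size_t i = 0; i < iters; i++)
                idx = next[idx];                      /* every load depends on the previous one */
            double dt = now_sec() - t0;

            printf("working set %8zu KiB: %6.2f ns/access (checksum %zu)\n",
                   elems * sizeof *next / 1024, dt / iters * 1e9, idx);
        }
        free(next);
        return 0;
    }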

And yes, you can hint that writes should bypass the cache; that is the purpose of, for instance, movnti. Don't bother manually using movnti for write-only accesses, though. The practical performance benefit is small, and even if the current function isn't reading that memory, there's a decent chance that other soon-to-be-executed code will.
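For reference, here is a minimal sketch of issuing non-temporal stores from C, assuming an x86 target with SSE2: the _mm_stream_si32 intrinsic compiles to movnti, and _mm_sfence is needed because non-temporal stores are weakly ordered. The buffer and values are placeholders for illustration only.

    /* Minimal sketch: regular stores vs. non-temporal (cache-bypassing) stores.
     * Build with something like: gcc -O2 -msse2 nt_store.c -o nt_store
     */
    #include <immintrin.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        enum { N = 1024 };
        int *buf = malloc(N * sizeof *buf);
        if (!buf) return 1;

        /* Regular stores: the written cache lines are allocated in L1/L2. */
        for (int i = 0; i < N; i++)
            buf[i] = i;

        /* Non-temporal stores: _mm_stream_si32 emits movnti, hinting that the
           written lines should not be kept in the cache hierarchy. */
        for (int i = 0; i < N; i++)
            _mm_stream_si32(&buf[i], i * 2);

        /* Non-temporal stores are weakly ordered; fence before relying on
           ordering with respect to other stores or other threads. */
        _mm_sfence();

        printf("%d\n", buf[N - 1]);
        free(buf);
        return 0;
    }

In practice this mostly pays off for large, memset- or memcpy-style fills that won't be read again soon, which matches the caveat above.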

