AI深度学习编译器工程师需要哪些技术栈?
I fell into this hole about two and a half years ago, and I've been thinking about this question ever since.

The breadth of knowledge a deep learning compiler touches is terrifying. Ideally you would:

- understand the structure of every model, like an algorithm engineer;
- understand the implementation of every functional module, like a deep learning framework engineer;
- understand compilation theory, like a compiler engineer;
- understand how to write and optimize kernels, like a high-performance-computing engineer;
- understand computer architecture, like a hardware engineer;
- plus the basics: programming languages, data structures and algorithms, operating systems....

This is obviously unrealistic. You cannot know every sub-field as well as the people who specialize in it. This troubled me for quite a while: however much you learn, it still feels like a drop in the ocean, and written out in full this checklist would run far longer than the upvoted answers above.

So I had to take a step back. A tech stack exists to solve problems, so the question becomes: how do you allocate limited time to maximize the problem-solving ability you accumulate? Finding the most learning-worthy material may matter more than the learning itself, and knowing what you can postpone is even more valuable than knowing what you need.

Concretely, I'd summarize my approach as "grab the essentials, build a framework, go deep on demand." Grabbing the essentials means every field has some entry-level, must-have knowledge without which nothing else makes sense. Building a framework means that, on top of those basics, you form a structured picture of the field: what problems it solves, roughly which methods exist for which problems, and where each method can be looked up. Going deep on demand builds on the first two and is the end goal: since we can't fully master every field's details up front, we should instead aim to acquire, quickly and driven by the task's actual requirements, whatever knowledge a task needs when it lands on us at work.

Let me leave this as a placeholder for now and come back later to write up, for each field, what the entry-level essentials are, what the framework looks like, and how to dive deep quickly.

One disclaimer before going on: what I describe here is the knowledge and ability that real, concrete work requires you to accumulate. As for what interviews will ask, you're better off searching interview write-ups; it's hardly news that technical interviews here have drifted away from practice.
Let's fill in the algorithm-layer hole first. The algorithm engineer's job is to solve a concrete problem end to end: from the raw data the user provides to the final data the user wants. For a CV image segmentation task, say, that means going from the raw image dataset all the way to labeled, segmented output images. A deep learning compiler, by contrast, solves a much shorter chain: given a model whose structure is already known, find the optimal computational implementation. Put more bluntly, a deep learning compiler only needs to understand what a model looks like. What the model is for, why it looks the way it does, how it was trained, how its inputs are obtained — none of that matters.

For example, an algorithm engineer must know how to handle imbalanced training data; that has nothing to do with a DL compiler. In NLP, how the input data gets embedded doesn't matter either; nor does the fact that BERT can solve different tasks by swapping the head on its last layer.

So what does "understanding what a model looks like" actually require? The most basic part is the semantics of each operator. There's not much to say here — everyone should be able to master this, and if you hold yourself to a higher standard, go through the ONNX operator documentation one entry at a time. At the model level, the number of models is endless, but the recurring building blocks are always the same, and many DL compiler optimizations are built around exactly these blocks. So my advice is to focus on accumulating an understanding of common model blocks, e.g. conv-bias-batchNorm, or the transformer encoder/decoder block. As for whole models, you don't need to look at many; the few classics that MLPerf runs are enough: resnet, yolov3, bert.
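To see why these recurring blocks matter to a compiler, here is a minimal numpy sketch of one classic optimization on the conv-bias-batchNorm pattern: folding the inference-time BatchNorm into the convolution's weights and bias. All shapes and values are made up for illustration, and the "conv" is a 1x1 convolution written as a matmul over channels to keep the example short.

```python
import numpy as np

# Inference-time BatchNorm folding on the conv-bias-batchNorm pattern.
# A 1x1 conv is just a matmul over the channel dimension; shapes are arbitrary.
rng = np.random.default_rng(0)
C_in, C_out, HW = 8, 16, 32
x = rng.standard_normal((HW, C_in))      # flattened spatial positions x input channels
W = rng.standard_normal((C_in, C_out))   # 1x1 conv weight
b = rng.standard_normal(C_out)           # conv bias
gamma, beta = rng.standard_normal(C_out), rng.standard_normal(C_out)
mean, var = rng.standard_normal(C_out), rng.random(C_out) + 0.5
eps = 1e-5

# Reference: conv followed by BatchNorm with frozen statistics (inference mode)
y_ref = (x @ W + b - mean) / np.sqrt(var + eps) * gamma + beta

# Folded: rescale the weights and bias once offline, then run a single conv
scale = gamma / np.sqrt(var + eps)
W_fold = W * scale                 # broadcasts over output channels
b_fold = (b - mean) * scale + beta
y_fold = x @ W_fold + b_fold

print(np.allclose(y_ref, y_fold))  # → True: BN disappears at inference time
```

The same algebra is what lets a compiler delete the BatchNorm node entirely from an inference graph.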
At work you may well run into a relatively unfamiliar model, so you need the ability to grasp a model quickly. Two concrete skills are worth accumulating here. The first is reading a model's onnx file with netron. Once you have the background from the previous paragraph, download some model's onnx file and try opening it. It may be painful at first (those who've tried will know), but the skill you should end up with is: open the onnx file of an unfamiliar model and quickly see which basic building blocks it consists of, and from there, where the optimization opportunities lie. The second skill is reading model code — HuggingFace for NLP, OpenMMLab for CV; sometimes you need model details, and going straight into the code is simply more convenient.
Today, the computer-architecture hole. I'm not yet familiar with hardware outside the NV ecosystem, so I'll use NV as the representative, and I'll file CUDA mastery under this heading too. How important this topic is depends on whether you want to go down the performance-optimization road: to understand GPU performance, you must understand the architecture deeply. "Architecture" as computer science uses the word, the way I understand it, means all the hardware details that matter for software design, excluding concrete circuit design. In the NV ecosystem, CUDA is essentially the software abstraction of the GPU architecture. So learning CUDA is the window into the GPU, and learning the GPU's architecture is, in turn, a deeper way of understanding CUDA. More specifically, I'd build the knowledge in three steps. First, go through CUDA once from a functional angle, to get a holistic, framework-level picture. Then study the GPU architecture: whether you can extract the hardware's full performance ultimately comes down to whether you exploit its parallelism and locality, so mastering the architecture means knowing every hardware feature that can improve parallelism or locality. The last step is to return to CUDA carrying that hardware understanding, and see how each hardware feature is exploited — or each performance pitfall avoided — through CUDA. Three learning resources, one per step: first, NV's official CUDA Samples; second, the GPGPU architecture book《通用图形处理器设计》by 景乃锋, 柯晶 and 梁晓峣; third, the official CUDA Programming Guide. If you want to go even deeper, study PTX next; and if you want to learn the instruction set (SASS), you might look at the open-source reverse-engineered assemblers, e.g. CuAssembler, which SenseTime open-sourced recently.
Today let's fill in the high-performance computing library hole. By a high-performance computing library I mean a kernel library aimed at one class of computational problems, e.g. cublas, cutlass, cutensor, cub (I'm deliberately leaving out cudnn: over its last few releases cudnn has been evolving from an operator library toward something more like a framework that handles local subgraphs). Building a high-performance operator library takes roughly three layers of ability.
The first layer is writing an efficient kernel for a specific problem. Take GEMM: different problem sizes (M, N, K), data types and layouts hit different bottlenecks on different cards, and the optimal implementation differs accordingly. So first you must master profiling with the usual tools (e.g. nsight compute) to locate the performance bottleneck. Then you need to know the various "tricks" of kernel implementation: the different pipelining schemes, the different ways of optimizing memory-access patterns. Once you have been through that whole loop enough times, you should end up able to build a performance model of the operator.
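As a first step toward the performance-modeling ability described above, here is a roofline-style back-of-the-envelope sketch for GEMM. The peak-FLOPS and bandwidth numbers are illustrative placeholders, not any particular GPU's spec sheet, and the byte count assumes each matrix is touched exactly once (the ideal case).

```python
# Minimal roofline-style estimate for an FP16 GEMM. Peak numbers are made up.
def gemm_roofline(M, N, K, bytes_per_elem=2, peak_flops=100e12, mem_bw=1e12):
    flops = 2 * M * N * K                                    # multiply-accumulates
    bytes_moved = bytes_per_elem * (M * K + K * N + M * N)   # each matrix read/written once
    intensity = flops / bytes_moved                          # FLOPs per byte
    balance = peak_flops / mem_bw                            # machine balance, FLOPs per byte
    bound = "compute-bound" if intensity > balance else "memory-bound"
    t_est = max(flops / peak_flops, bytes_moved / mem_bw)    # simple two-limit time estimate
    return intensity, bound, t_est

# A large square GEMM is compute-bound...
print(gemm_roofline(4096, 4096, 4096)[1])  # → compute-bound
# ...while a skinny, GEMV-like problem is memory-bound
print(gemm_roofline(1, 4096, 4096)[1])     # → memory-bound
```

Real kernels rarely achieve the ideal byte count, which is exactly why the tiling and pipelining "tricks" above exist; the model still tells you which limit you are fighting.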
The second layer is making the operator library modular and properly engineered. An operator library is software aimed at a class of computational problems, not a mere pile of kernels. As a mature piece of software engineering, a high-performance operator library should turn the various "tricks" into building blocks that can be freely composed like Lego bricks, maximizing code reuse and keeping the whole codebase manageable. Beyond that, the various activations and add-bias/scale operations (collectively, the epilogue) also need to be modularized and allowed to be fused into the gemm/conv kernels, which is a crucial foundation for the graph-level optimizations above. One step further: code is static, but a team is alive. An operator-library team should be an efficient factory for high-performance kernels: new requirements become new features or new kernels, new kernels go into testing, performance regressions found there get fixed — and the efficiency of every link in that chain deserves serious attention.
The third layer is helping users with operator selection, which feels a bit like returning to the first layer's question after finishing the second: given a concrete user case, how do you find the optimal implementation? This layer is really the boundary between high-performance operator libraries and frameworks/DL compilers. Some libraries, like cublas, ship heuristics that solve it to some extent; cutlass, by comparison, is a cleaner template library that doesn't try to solve it at all.
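The selection problem can be sketched as a toy, with entirely made-up kernel "variants" and a made-up cost function standing in for on-device measurement: a cublas-style heuristic picks a variant from the problem shape alone, while an exhaustive, cutlass-profiler-style search "measures" every variant and keeps the best.

```python
# Toy operator selection: three hypothetical GEMM tile variants.
VARIANTS = {"tile_128x128": 128, "tile_64x64": 64, "tile_32x32": 32}

def fake_cost(tile, M, N):
    # Hypothetical cost model: big tiles waste work on small problems (padding),
    # small tiles pay more launch/scheduling overhead on big ones.
    padded = ((M + tile - 1) // tile) * ((N + tile - 1) // tile) * tile * tile
    overhead = (M // tile + 1) * (N // tile + 1)
    return padded + 50 * overhead

def heuristic_pick(M, N):
    # cublas-style: a shape-based rule of thumb, no measurement involved
    if min(M, N) >= 128:
        return "tile_128x128"
    return "tile_64x64" if min(M, N) >= 64 else "tile_32x32"

def exhaustive_pick(M, N):
    # profiler-style: "measure" every variant and keep the cheapest
    return min(VARIANTS, key=lambda v: fake_cost(VARIANTS[v], M, N))

for shape in [(1024, 1024), (96, 96), (40, 40)]:
    print(shape, heuristic_pick(*shape), exhaustive_pick(*shape))
```

Even in this toy, the two disagree on awkward shapes — which is exactly why real systems keep both a cheap heuristic and an expensive profiling path.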
Let's come back to the deep learning compiler engineer's point of view. A deep learning compiler relates to high-performance operator libraries in two ways:

The first relationship: the DL compiler is itself a high-performance operator library. DL compilers usually have a module that auto-generates high-performance kernels (codegen), or ship some hand-written high-performance kernels. If you're the engineer responsible for that part, you essentially need the full skill set of an operator-library engineer — though for now, the kernels that compiler codegen handles well are basically the pointwise class, which are relatively simple to implement.

The second relationship: the DL compiler calls external high-performance operator libraries, mainly for gemm/conv — the compute-intensive operators with many tricks and high difficulty. If you're the engineer managing those external-library calls, you can focus your accumulation on the second and third layers; for the first layer, it's enough at the start to know which "tricks" exist and what problem each one solves, and leave the concrete implementation techniques for later.
Reluctantly counting myself as a practitioner, judging from my own work, from top to bottom you may need the following knowledge points:
- the basic principles of common deep learning algorithms: detection, recognition, segmentation, etc.;
- common model components such as convolution, pooling, activation functions; typical models such as MobileNet, ResNet;
- the basic model-training workflow, some training tricks, model-structure adjustment, and freezing/exporting model parameters;
- model quantization at the theory or algorithm level;
- parsing common model storage formats, e.g. the onnx model structure and parameter information;
- defining a simple model IR of your own that can store and pass model information reasonably correctly;
- understanding hardware design; knowing the design rationale is even better;
- fusing, splitting and transforming model structures according to hardware constraints;
- hand-writing fixed-point versions of common operators (at the functional level);
- understanding how the hardware performs fixed-point computation;
- knowing the basic optimizer schemes, with a reasonably macro-level picture of the model's compute pipeline;
- knowing basic algorithms such as DFS and dynamic programming;
- knowing 2D-level memory allocation and compute-scheduling principles;
- knowing the cost-model concept, and being able to compute cost models for different operator implementations under different conditions;
- understanding hardware instructions, register-configuration rules, and scheduling rules;
- understanding the basic runtime workflow;
- communicating promptly and amicably.

That's a quick summary of part of the knowledge points; if you prepare for interviews against it, it should at least get you into a second round.
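As an illustration of the 2D memory-allocation point in the list above, here is a minimal static memory planner: tensors with hypothetical byte sizes and step-range lifetimes are greedily assigned offsets in a single arena, so that tensors whose lifetimes overlap never overlap in memory while dead tensors' space gets reused.

```python
# Minimal static memory planning sketch: first-fit offsets over tensor lifetimes.
def plan_memory(tensors):
    """tensors: list of (name, size_bytes, start_step, end_step). Greedy, biggest first."""
    placed = []  # (offset, size, start, end)
    plan = {}
    for name, size, s, e in sorted(tensors, key=lambda t: -t[1]):
        # intervals already placed whose lifetimes overlap this tensor's
        busy = sorted((off, off + sz) for off, sz, s2, e2 in placed
                      if not (e < s2 or e2 < s))
        offset = 0
        for lo, hi in busy:                 # first gap large enough
            if offset + size <= lo:
                break
            offset = max(offset, hi)
        placed.append((offset, size, s, e))
        plan[name] = offset
    arena = max((off + sz for off, sz, _, _ in placed), default=0)
    return plan, arena

# Hypothetical lifetimes: A dies at step 2, so C (alive at steps 3-4) reuses its slot.
tensors = [("A", 1024, 0, 2), ("B", 512, 1, 3), ("C", 1024, 3, 4)]
plan, arena = plan_memory(tensors)
print(plan, arena)  # arena is smaller than the naive 2560-byte sum of sizes
```

Real planners add alignment, in-place reuse and smarter orderings, but the lifetime-interference idea is the same.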
What you learn on paper always feels shallow; to truly know a thing, you must do it. The TVM documentation describes adding a new operator to Relay (the corresponding doc: https://tvm.apache.org/docs/dev/how_to/relay_add_op.html), but it omits many details. This post walks through actually adding a new operator to Relay, lists the details one by one, and gives the files touched, the new code, and the test scripts.
Operator definition
Because LayerNorm computes the mean and variance online at inference time, its runtime overhead is relatively large (see discussions of LayerNorm's computational characteristics and deployment optimization). One way to reduce that overhead is to replace LayerNorm with a new normalization method; RMSNorm is one viable piece of research in that direction.
RMSNorm paper: Zhang B, Sennrich R. Root Mean Square Layer Normalization[J]. Advances in Neural Information Processing Systems, 2019, 32.
A well-known explanation of LayerNorm's success is its invariance to re-centering and re-scaling. The former makes the model insensitive to shift noise in the inputs and weights; the latter keeps the output representation intact when both inputs and weights are randomly scaled. The RMSNorm paper hypothesizes that the re-scaling invariance, not the re-centering invariance, is the reason LayerNorm works.

RMSNorm therefore keeps only the re-scaling invariance, normalizing by the root-mean-square (RMS) statistic; relative to LayerNorm, the mean statistics are dropped.
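The two invariances can be checked numerically in a few lines — a self-contained numpy sketch, not the TVM implementation below: rescaling the input leaves the RMSNorm output unchanged, while shifting it (which LayerNorm's re-centering would absorb) does not.

```python
import numpy as np

# Numerical check of the invariance discussion above.
def rms_norm(x, eps=1e-6):
    # normalize by the root-mean-square over the last axis
    return x / np.sqrt(np.mean(np.square(x), axis=-1, keepdims=True) + eps)

x = np.random.default_rng(0).standard_normal((4, 16))
print(np.allclose(rms_norm(7.0 * x), rms_norm(x), atol=1e-5))  # rescaling: → True
print(np.allclose(rms_norm(x + 7.0), rms_norm(x), atol=1e-5))  # shifting:  → False
```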
RMSNorm is computed as follows:
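Following the definition in the paper, where g is the learned gain (called gamma in the code below; the implementation additionally adds a small epsilon under the square root for numerical stability):

```latex
\bar{x}_i = \frac{x_i}{\mathrm{RMS}(x)} \, g_i,
\qquad
\mathrm{RMS}(x) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}
```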
Define the attribute node

Attributes are fixed parameters known at compile time. We define an attribute struct describing the operator's attributes; for the Conv2d operator, for example, stride, padding, dilation and kernel_size are attributes.

Add the following code to tvm/include/tvm/relay/attrs/nn.h:
/*! \brief Attributes used in RMSNorm operator */
struct RMSNormAttrs : public tvm::AttrsNode<RMSNormAttrs> {
  int axis;
  double epsilon;
  bool scale;

  TVM_DECLARE_ATTRS(RMSNormAttrs, "relay.attrs.RMSNormAttrs") {
    TVM_ATTR_FIELD(axis).set_default(-1).describe("Specify which shape axis denotes the channel.");
    TVM_ATTR_FIELD(epsilon).set_default(1e-5).describe(
        "Small float added to variance to avoid dividing by zero");
    TVM_ATTR_FIELD(scale).set_default(true).describe(
        "If true, multiply by gamma; otherwise, gamma is ignored.");
  }
};  // struct RMSNormAttrs
Write the type relation

At compile time we need to check the operator's input and output types and relate the input and output types to each other. These relations are expressed as functions: a relation takes a list of input and output types (any of which may be incomplete) and returns a list of input and output types that satisfies the relation. This includes shape information, which can be determined statically at compile time.

Add the following code to tvm/src/relay/op/nn/nn.cc:
// rms_norm: register the attribute node
TVM_REGISTER_NODE_TYPE(RMSNormAttrs);

// Type checking / shape inference
bool RMSNormRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
                const TypeReporter& reporter) {
  ICHECK_EQ(types.size(), 3);  // [data, gamma, output]: inputs + output
  const auto* data = types[0].as<TensorTypeNode>();
  if (data == nullptr) return false;
  const RMSNormAttrs* param = attrs.as<RMSNormAttrs>();
  // axis may be a Python-style negative index, e.g. -1
  int axis = param->axis >= 0 ? param->axis : param->axis + data->shape.size();
  ICHECK(axis >= 0 && axis < (int)data->shape.size());
  // gamma's shape equals the extent of the normalized axis of data
  reporter->Assign(types[1], TensorType({data->shape[axis]}, data->dtype));
  // output's shape equals data's shape
  reporter->Assign(types[2], TensorType(data->shape, data->dtype));
  return true;
}
Associate the operator with its attributes

Register the operator's name and other descriptions, and annotate it with the call interface.

Add the following code to tvm/src/relay/op/nn/nn.cc:
// Build the operator call from the inputs and attributes
Expr MakeRMSNorm(Expr data, Expr gamma, int axis, double epsilon, bool scale) {
  auto attrs = make_object<RMSNormAttrs>();
  attrs->axis = axis;
  attrs->epsilon = epsilon;
  attrs->scale = scale;
  static const Op& op = Op::Get("nn.RMS_norm");
  return Call(op, {data, gamma}, Attrs(attrs), {});
}
TVM_REGISTER_GLOBAL("relay.op.nn._make.RMS_norm").set_body_typed(MakeRMSNorm);
// Register the operator
RELAY_REGISTER_OP("nn.RMS_norm")
    .describe(R"code(
RMSNorm: It is a replacement of LayerNorm.
Zhang B, Sennrich R. Root mean square layer normalization[J]. Advances in Neural Information Processing Systems, 2019, 32
)code" TVM_ADD_FILELINE)
    .set_attrs_type<RMSNormAttrs>()
    .set_num_inputs(2)
    .add_argument("data", "Tensor", "Input to which RMS_norm will be applied")
    .add_argument("gamma", "Tensor", "The gamma scale factor.")
    .set_attr<FInferCorrectLayout>("FInferCorrectLayout", NormalizationInferCorrectLayout<RMSNormAttrs>)
    .set_support_level(1)
    .add_type_rel("RMSNorm", RMSNormRel);
Define the operator's compute

TVM's TOPI operator library contains the compute and schedule definitions of operators for multiple backends; here we register the operator on the Python side.

Create a new file tvm/python/tvm/topi/nn/RMS_norm.py with the following code. It uses TVM's cross-language calling mechanism: the compute definition of RMSNorm lives on the C++ side, and the Python side simply exposes the interface.
"""RMS normalization operator."""
from .. import cpp
def RMS_norm(data, gamma, axis, epsilon=1e-5):
"""RMS normalization operator.
Parameters
----------
data : tvm.te.Tensor
N-D with shape (d_0, d_1, ..., d_{N-1})
gamma: tvm.te.Tensor
K-D with shape (r_0, r_1, ..., r_{K-1}) where K == len(axis) and d_{axis_k} == r_k
axis : list of int
Axis over the normalization applied
epsilon : float
The epsilon value to avoid division by zero.
Returns
-------
result : tvm.te.Tensor
N-D with shape (d_0, d_1, ..., d_{N-1})
"""
return cpp.nn.RMS_norm(data, gamma, axis, epsilon)
Remember to import the operator in tvm/python/tvm/topi/nn/__init__.py.
Create a new file tvm/include/tvm/topi/nn/RMS_norm.h with the following code, which describes RMSNorm's compute flow:
/*!
* \brief RMS normalization op constructions
* \file nn/RMS_norm.h
*/
#ifndef TVM_TOPI_NN_RMS_NORM_H_
#define TVM_TOPI_NN_RMS_NORM_H_
#include <tvm/te/operation.h>
#include <tvm/topi/tags.h>
#include <string>
namespace tvm {
namespace topi {
namespace nn {
using namespace tvm::te;
/*!
* \brief RMS normalization.
* \param data N-D tensor with shape [d_0, d_1, ..., d_{N-1}]
* \param gamma K-D tensor with shape [r_0, r_1, ..., r_{K-1}] where K == len(axis) and
* d_{axis_k} == r_k
* \param axis The axis to normalize over.
* \param epsilon The epsilon value to avoid division by zero.
* \param name The name of the operation.
* \param tag The tag to mark the operation.
* \return The normalized tensor, with the same shape as data.
*/
inline Tensor RMS_norm(const Tensor& data, const Tensor& gamma,
const Array<Integer>& axis, double epsilon,
std::string name = "T_RMS_norm", std::string tag = kInjective) {
// sum x^2
auto ndim = data->shape.size();
ICHECK_NE(ndim, 0) << "Cannot reduce a 0 dim Tensor";
auto real_axis = GetRealAxis(static_cast<int>(ndim), axis);
auto reduce_axes = MakeReduceAxes(real_axis, data);
auto target_shape =
MakeReduceTargetShape(real_axis, data, /*keepdims=*/false, /*atleast1d=*/true);
auto func = MakeTupleSumReducer();
auto compute = [ndim, &real_axis, &reduce_axes, &func, &data](const Array<Var>& indices) {
Array<PrimExpr> eval_range;
int arg_counter = 0;
int red_counter = 0;
for (size_t i = 0; i < ndim; ++i) {
if (std::find(real_axis.begin(), real_axis.end(), i) != real_axis.end()) {
// real_axis contains i
eval_range.push_back(reduce_axes[red_counter]);
red_counter++;
} else {
eval_range.push_back(indices[arg_counter]);
arg_counter++;
}
}
auto square = [](const PrimExpr& x) { return x * x; };
return func({data(eval_range), square(data(eval_range))}, reduce_axes, nullptr);
};
auto temp_x_x2 =
tvm::te::compute(target_shape, compute, data->op->name + "_red_temp", kCommReduce);
  // take the sum of squares (element 1 of the tuple reduction)
auto temp_x2 = temp_x_x2[1];
  // the number of elements to divide by when averaging the sum of squares
auto reduce_extent = make_const(data->dtype, 1);
for (int i : real_axis) {
reduce_extent *= data->shape[i];
}
auto RMS_norm_func = [&](const Array<Var>& indices) {
Array<Var> reduce_indices, non_reduce_indices;
for (int i = 0, n = static_cast<int>(indices.size()); i < n; ++i) {
if (std::find(real_axis.begin(), real_axis.end(), i) != real_axis.end()) {
reduce_indices.push_back(indices[i]);
} else {
non_reduce_indices.push_back(indices[i]);
}
}
    auto mean_sq = temp_x2(non_reduce_indices) / reduce_extent;  // mean of squares, i.e. RMS^2
    auto RMS_norm = data(indices) * tvm::rsqrt(mean_sq + make_const(mean_sq->dtype, epsilon));  // tvm::rsqrt(x) == 1 / tvm::sqrt(x)
    RMS_norm = topi::multiply(RMS_norm, gamma(reduce_indices));
return RMS_norm;
};
  return tvm::te::compute(data->shape, RMS_norm_func, name, tag);
}
} // namespace nn
} // namespace topi
} // namespace tvm
#endif // TVM_TOPI_NN_RMS_NORM_H_
Add the following code to tvm/src/topi/nn.cc to register the C++ side of the TOPI operator library:
// Note: include the header file below
#include <tvm/topi/nn/RMS_norm.h>
/* Ops from nn/RMS_norm.h */
TVM_REGISTER_GLOBAL("topi.nn.RMS_norm").set_body([](TVMArgs args, TVMRetValue* rv) {
  *rv = nn::RMS_norm(args[0], args[1], args[2], static_cast<double>(args[3]));
});
Provide the Python API

Add the following code to tvm/python/tvm/relay/op/nn/nn.py:
def RMS_norm(data, gamma, axis=-1, epsilon=1e-5, scale=True):
    return _make.RMS_norm(data, gamma, axis, epsilon, scale)
Write the tests

Create a new file tvm/python/tvm/topi/testing/RMS_norm_python.py with the following code. This file is the test reference for the TOPI operator, written in numpy; it serves as the ground truth during testing.
import numpy as np
from functools import reduce
def RMS_norm_python(data, gamma, axis, epsilon=1e-5):
    """RMS normalization operator in Python.

    Parameters
    ----------
    data : numpy.ndarray
        N-D with shape (d_0, d_1, ..., d_{N-1})

    gamma : numpy.ndarray
        K-D with shape (r_0, r_1, ..., r_{K-1}) where K == len(axis) and d_{axis_k} == r_k

    axis : tuple of ints
        Axes over which the normalization is applied

    epsilon : float
        The epsilon value to avoid division by zero.

    Returns
    -------
    result : np.ndarray
        N-D with shape (d_0, d_1, ..., d_{N-1})
    """
    # number of elements reduced over
    n = reduce(lambda acc, i: acc * data.shape[i], axis, 1)
    # mean of squares; keepdims=True so it broadcasts back over data
    mean_sq = np.sum(np.square(data), axis=tuple(axis), keepdims=True) / n
    result = data / np.sqrt(mean_sq + epsilon)
    result *= gamma
    return result
Remember to import it in tvm/python/tvm/topi/testing/__init__.py; the import line is as follows:
from .RMS_norm_python import RMS_norm_python
Create a new file tvm/tests/python/topi/python/test_topi_RMS_norm.py with the following code:
"""Test code for RMS_norm."""
import numpy as np
import pytest
import tvm
from tvm import te
from tvm import topi
from tvm.topi.utils import get_const_tuple
import tvm.topi.testing
import tvm.testing
# use the generic injective schedule
_RMS_norm_schedule = {
    "generic": topi.generic.schedule_injective,
}


# test over the last dim and the last two dims separately
@tvm.testing.parametrize_targets("llvm")
@pytest.mark.parametrize("shape,axis", [([4, 16], (1,)), ([4, 16, 16], (1, 2))])
def test_RMS_norm(target, dev, shape, axis, epsilon=1e-5, dtype="float32", rtol=1e-5, atol=1e-5):
    data = te.placeholder(shape, dtype=dtype, name="data")
    scale_shape = [shape[dim] for dim in axis]
    gamma = te.placeholder(scale_shape, dtype=dtype, name="gamma")
    B = topi.nn.RMS_norm(data, gamma, axis, epsilon)  # call the RMSNorm in the TOPI operator library
    data_np = np.random.uniform(size=shape).astype(dtype)
    gamma_np = np.random.uniform(size=scale_shape).astype(dtype)
    b_np = tvm.topi.testing.RMS_norm_python(data_np, gamma_np, axis, epsilon)  # the numpy RMSNorm serves as ground truth
    with tvm.target.Target(target):
        s_func = tvm.topi.testing.dispatch(target, _RMS_norm_schedule)
        s = s_func([B])
    data_tvm = tvm.nd.array(data_np, dev)
    gamma_tvm = tvm.nd.array(gamma_np, dev)
    b_tvm = tvm.nd.array(np.zeros(get_const_tuple(B.shape), dtype=dtype), dev)
    f = tvm.build(s, [data, gamma, B], target)
    f(data_tvm, gamma_tvm, b_tvm)
    tvm.testing.assert_allclose(b_tvm.asnumpy(), b_np, rtol=rtol, atol=atol)


if __name__ == "__main__":
    tvm.testing.main()
Run the tests

Since C++ code was modified, the whole project must be re-made to regenerate the updated shared libraries. Then run the test file tvm/tests/python/topi/python/test_topi_RMS_norm.py; if both cases pass, the operator registration works.
Summary

Registering an operator in TVM is not limited to this one route. For BatchNorm, for instance, the compute is not described on the C++ side but defined on the Python side (tvm/topi/nn/batch_norm.py), and its schedule has multiple implementations for different backends.

This post only covered adding the operator inside the compiler; a follow-up will cover defining RMSNorm in PyTorch, exporting it to ONNX, and importing it through the Relay frontend. (another hole to fill :)
For fellow practitioners, fundamentals worth continuously topping up — let's encourage each other:
- mlir/llvm (many job postings explicitly say LLVM/MLIR development experience preferred~)
- pytorch/tensorflow or other deep learning frameworks (one or more), plus building small models of your own to run tests
- onnx
- languages: python, C++ (C++11, templates)
- math: linear algebra (the foundation of matrix multiplication and convolution)
- deep learning basics: convolution, pooling, activation, forward/backward propagation and other basic operations; classic network families such as ResNet, MobileNet, Inception, VGG, YOLO
- compiler theory, IR/Pass (compiler optimization is endless — you'll really come to feel it later on)
- operating systems: memory access, the storage hierarchy
- computer architecture: instruction sets (some operator code generation may touch them), pipelines
- misc: git, common linux commands
- if you work on CodeGen (i.e. operator code generation), you need to know the hardware: NPU (DPU, GPU, xPU) or SoC module functions, register configuration, DDR/SRAM memory-access characteristics, and the compute flow and its constraints
- if you work on the frontend, you need to know xla, onnx, mlir, etc.
Recommended book:《智能计算系统》(AI Computing Systems).