English Expressions

Expressions

  1. For each UAP, we evaluate its fooling rate on the model it was generated from (white-box attack) and on the three remaining models (transfer attack).
  2. Our results show that models trained on Stylized-ImageNet are still as vulnerable to these UAPs as models trained on ImageNet.
  3. For transfer attacks, where the UAP is generated from a model different from the evaluated one, the fooling rate consistently rises for ε > 4.
  4. Particularly, in the case of critical applications involving safety and security, reliable models need to be deployed to withstand strong adversarial attacks. Thus, the effect of these structured perturbations has to be studied thoroughly in order to develop dependable machine learning systems. (why adversarial examples need to be studied)
  5. We train G in order to be able to generate the UAPs that can fool the target classifier over the underlying data distribution.
  6. Hence, diagonal entries denote the white-box adversarial attacks and the off diagonal entries denote the black-box attacks.
  7. It computes the gradient of the loss function with respect to pixels, and moves a single step based on the sign of the gradient.
  8. To verify this latter claim
  9. universal perturbations computed for the VGG-19 network have a fooling ratio above 53% for all other tested architectures.
  10. Adversarial samples also possess the characteristic of transferability, which means adversarial sample generated for attacking one model could also mislead another model.
  11. In this paper we present the first comprehensive evaluation of transferability of evasion and poisoning availability attacks,
  12. as formulated in [42] and in follow-up work (e.g., [33]).
  13. It is not difficult to see that, for …
  14. In Fig. 6a we report the mean test error at e=1 for each target model against the size of its input gradients (S, averaged on the test samples and on the 10 runs).
  15. A visual inspection of the poisoning digits
  16. The value in the i-th row and j-th column of each heatmap matrix is the proportion of the adversarial examples successfully transferred to target model j out of all adversarial examples generated by source model i (including both successful and failed attacks on the source model). (how to explain a transfer/confusion matrix)
  17. In order to assess texture and shape biases, we conducted six major experiments along with three control experiments, which are described in the Appendix.
  18. more fine-grained statement
  19. We view bridging this gap as an interesting direction for future work.
  20. We focus on fog in this work, due to the availability of relevant data, but our framework is generalizable and can be easily extended to other weather conditions.)
  21. We experimentally analyze Vision Transformers to answer this question.
  22. We urge interested readers to examine the supplemental material where we provide descriptions of each attack. (phrasing for referring readers to the original paper when citing their work or views)
  23. We demonstrate the SAGA results by attacking a simple ensemble of Vision Transformers and Big Transfer Models for CIFAR-10, CIFAR-100 and ImageNet. (phrasing for covering different datasets and models)
  24. For ImageNet, we use Bit-M-R152x4 and ViT-L-16
  25. This intriguing phenomenon of human imperceptible perturbation fooling the DNN has inspired active research for studying the model robustness against adversarial attack techniques
  26. has introduced an efficient one-step attack method, widely known as the Fast Gradient Sign Method (FGSM). [27] has extended the basic FGSM to its iterative variant, i.e. I-FGSM,
  27. For consistency we follow prior works [38, 46, 41] and adopt l∞ = 10/255. Unless otherwise specified, the UAP in this work is by default untargeted
  28. Concurrent to [46], [20] adopts a similar approach.
  29. Our results align well with [61] that identifies frequency as a factor for the class-wise robustness gap against targeted UAP
  30. Overall, there is a general consensus among the UAP researchers that crafting UAP with limited or no data is challenging.
  31. For a detailed description of their various methods, we refer the readers to [31].
  32. Universal adversarial perturbation (UAP), i.e. a single perturbation to fool the network for most images, is widely recognized as a more practical attack because the UAP can be generated beforehand and applied directly during the attack stage. (a description of universal adversarial perturbations)
  33. image-agnostic perturbations that cause most natural images to be misclassified.
  34. Universal adversarial perturbations exhibit many interesting properties such as their universality across networks, which means that a perturbation constructed using one DNN will perform relatively well for other DNNs.
  35. Our analysis with UAPs reveals the extent to which increased shape-bias improves adversarial robustness of models to universal attacks.
  36. According to our experiments, a small number K, for example, K = 2 or 3, will be sufficient.
  37. we omit the detailed description of the model architecture and refer the readers to
  38. The PASCAL VOC dataset contains 20 object categories. The majority of images in the PASCAL VOC dataset have 1 to 5 object instances, on average, 1.4 categories and 2.3 instances per image. The MS COCO dataset contains 80 object categories. Images in this dataset have more object instances, on average, 3.5 categories and 7.7 instances per image.
  39. Its core idea is that,
  40. With the progress of adversarial attacks,
  41. Although many certified adversarial defense methods have reported substantial advances in adversarial robustness, recent works [19, 9, 3] report that gradient masking may present a false sense of security.
  42. Despite a large literature devoted to improving the robustness of deep-learning models, many fundamental questions remain unresolved. 虽然有发展,但是还有很多问题待解决
  43. The regularization parameter λ is an important hyper-parameter in our proposed method. We show how the regularization parameter affects the performance of our robust classifiers by numerical experiments on two datasets, MNIST and CIFAR10. (on the importance of a hyper-parameter)
  44. The figure shows the classification accuracy rates (percentage) of ResNet against different attacks (higher is better) on CIFAR-10.
  45. Since the types of attacks in the real world are very diverse, the adversarial examples used in the training procedure are often biased.
  46. the delicately crafted special noise
  47. However, the existence of adversarial examples has raised concerns about the security of computer vision systems
  48. To address security concerns for high-stakes applications, researchers are searching for ways to make models more robust to attacks
  49. ∇ is the gradient of y with respect to x: ∇ = ∂y/∂x
  50. We demonstrate the effectiveness of AdvDrop by extensive experiments,
  51. The proposed adversarial example detection approach is evaluated against six state-of-the-art adversarial example generation methods: FGSM (L), PGD (L), CW (L), AutoAttack (L), Square (L), and boundary attack. All attacks are implemented as “non-targeted” attacks on three datasets: CIFAR-10 [5], ImageNet [6], and STL-10 [3].
  52. In this setting, the adversary is aware of the different steps involved in the adversarial defense method but does not have access to the method’s parameters.
  53. The LID and the proposed detection method are trained and tested using the same adversarial attack methods, except for CWwb attack, where the detector is trained on the traditional CW attack.
  54. Each adversarial example detection method is trained using CW attack, and evaluated on other attacks.
  55. The objective of this experiment is to investigate the impact of graph topology on detection performance
  56. Our work is motivated by this observation, and we proposed an algorithm to leverage these robust features.
  57. For the DAN algorithm, improvement in robustness is 0% to 22.11% at the cost of an 18% drop in clean accuracy.
  58. Similar improvement in robustness is also visible in experiments involving Office-31 and Office-Home datasets as shown in Table 1.
  59. We choose it based on the magnitude of the domain adaptation loss. (explaining how a loss parameter was set)
  60. RFA outperforms both Baseline and Robust Pre-Training with significant margins.
  61. The attention mechanism plays a critical role in the human visual system and is widely used in a variety of application tasks
  62. AGKD-BML overall outperforms the comparison methods on CIFAR-10.
  63. AGKD-BML also shows better adversarial robustness on the SVHN dataset by a large margin.
  64. The successes achieved in deep learning have brought great improvements to CMHR.
  65. Lower performance means stronger attack capability.
  66. we conduct an extensive amount of experiments to demonstrate the efficacy of AdvRush on various benchmark datasets.
  67. there remains one important question that is yet to be explored extensively
  68. we warm up α and ω of the supernet without L. Upon the completion of the warm-up process, we introduce L for additional epochs
  69. Please refer to the Appendix for the visualization of each architecture.
  70. This may impede their practical deployment for training robust models and efficiently evaluating robustness.
  71. We tried to include the B&B ℓ1 and ℓ2 attacks [5] in our comparison, but their official implementations kept crashing.
  72. Investigating the original code, we found errors in the implementation of the CIEDE2000, resulting in both wrong values and wrong gradients.
  73. our C&W baseline gets performance that is on par or better than ALMA LPIPS on both datasets
  74. In recent years, research on adversarial attacks has become a research hotspot.
  75. Unless mentioned, all experiments in this section are based on the integration of TI-DIM [9] method with our proposed architecture.
  76. For a comprehensive review, we refer readers to [3]
  77. SimCat robustness to adaptive PGD-ℓ2 over epochs of adversarial training with varied values of the hyperparameter β, which controls the momentum updates.
  78. Note that Y and y are equivalent notations of the truth labels of x.
  79. Higher values of ASR indicate the corresponding method has a high attacking performance.
  80. we introduce a new VQA benchmark dynamically collected with a Human-and-Model-in-the-Loop procedure.
  81. natural domain versus adversarial domain
  82. where small perturbations on the input can easily subvert the model’s prediction.
  83. raising safety concerns in autonomous driving (Eykholt et al., 2018) and medical diagnosis
  84. All parameters are determined by grid search.
  85. A natural question arises whether it is possible to craft a universal perturbation that fools the network only for certain classes while having minimal influence on other classes.
  86. We quantify how the performance benefits of transferring features decreases the more dissimilar the base task and target task are.
  87. on the spot — immediately, at the scene; in danger; in a position of responsibility
  88. a plausible explanation is that
  89. Although feature squeezing generalizes to other domains, here we focus on image classification. Because it is the domain where adversarial examples have been most extensively studied.
  90. Their method requires a large set of both adversarial and legitimate inputs and is not capable of detecting individual adversarial examples, making it not useful in practice.
  91. It is computationally expensive and can only detect adversarial examples lying far from the manifolds of the legitimate population.
  92. Given unlimited perturbation bounds, one could always force a model to misclassify an example into any class, but often by producing images that are unrecognizable or obviously suspicious to humans.
  93. There has been substantial work demonstrating that
  94. We perform a systematic empirical study on various defenses.
  95. If these intermediate defender states are “leaked” to the attacker, we call it intermediate defender states leaking, or simply states leaking, otherwise we call it non states-leaking, or simply non-leaking.
  96. Countless real-world applications involve streaming data that arrive in an online fashion
  97. Our first insight elucidates that
  98. For the sake of convenience, the names of the networks are shortened as ResNet, Inception, DenseNet, MobileNet, SENet, and PNASNet in the rest of the paper.
  99. We use the publicly available differentiable renderer from Kato et al. [11] and although they introduced their method for color and silhouettes only, we extend it for rendering depth maps in a differentiable way.
  100. Alongside works endeavoring to explain adversarial examples, others have proposed defenses in order to increase robustness.
  101. the proposed method achieves slightly lower accuracy on adversarially trained models than BIM
  102. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years. (two facets of the same field)
  103. Similar studies between spatial invariance and such common corruptions are an interesting direction of future work.
  104. recent efforts have resulted in equivariant NN models for other transformations such as rotation, flip
  105. Such group-equivariant NN models (GCNNs) are designed to be invariant to a specific group of transformations,
  106. Our approach results in superior performance as well as interpretability
  107. Previous works have broadly investigated the reason for the prevalence of such adversarial examples.
  108. we construct a simple, intuitive example in which shift invariance dramatically reduces robustness. (an intuitive example)
  109. deployed machine learning systems
  110. it is imperative / necessary / essential / urgent to ensure that
  111. this observation implies that
  112. Some ascribe the adversary to the linear accumulation of perturbations from inputs to final outputs
  113. This phenomenon could be semantically explained as subclasses from the same superclass sharing more feature similarities.
  114. The former and the latter are often referred to as the whitebox and the blackbox threat model, respectively.
  115. In particular, the efficacy of the attack is even higher than the existing blackbox attacks and comparable to the existing whitebox attacks.
  116. Further, our method surpasses the state-of-the-art whitebox KED-MI attack on Pubfig83 and achieves a close attack accuracy on the CelebA dataset.
  117. In particular, BREP-MI outperforms GMI and the blackbox attack by a substantial margin for all model architectures.
  118. In the course of our experiments,
  119. Unless otherwise mentioned
  120. The rationale behind doing so is to craft adversarial perturbations that are tolerant to slight movements that are natural when physically wearing the frames. (the rationale behind doing so)
  121. our user-study participants deemed as distinguishable from real eyeglasses’ patterns
  122. The phenomenon holds generally across different setups.
  123. The WIDER FACE dataset contains 32,203 images and 393,703 annotated face bounding boxes with a high degree of variability in scales, poses, occlusions, expression, makeup, and illumination.
  124. We randomly initialize the patch, and set α = 0.5,unless otherwise specified.
  125. We observe a similar phenomenon for all these setups, as shown in Fig. 5.
  126. the Patch-IoU can cause the recall to decrease dramatically across different initializations, leading to a successful attack.
  127. In this experiment, we test 9 different angles ranging from -60◦ to 60◦ by rotating the turntable with a 15-degree increment at a time to take photos. To accurately capture the angle, we print one image at a time (instead of 6 images per paper). For this experiment, we only compare our D2Pp method with the best performing baseline, the EOT method (ε = 30).
  128. Intuitively, different channels have high responses to different features, and a larger Mean Magnitude of a Channel (MMC) implies that the model puts more emphasis on this channel, hoping that this channel is more important for the task at hand.
  129. between the original MMC of the training set (mini-train) and each test set, to see how much neural networks change channel emphasis when faced with novel tasks
  130. In this section, we aim to show how its formulation makes SIA effective against IDs as well
  131. All our tabulated results are shown for one iteration, alpha = 250 and = 10^-3, unless otherwise stated.
  132. We believe this accuracy improvement is due to pruning finding the right capacity of the network and hence reducing overfitting
  133. making it imperative to evaluate viewpoint robustness
  134. We strive for completeness, but given the high activity on this topic, with new papers constantly coming out, there will be further improvements in attacks and defenses that are not covered here. Nonetheless, the most representative attacks and defenses, which reveal how broad this research field is and how distinct the proposed solutions are, can be found in this article. (stock phrasing for survey writing)
  135. On one side, adversarial examples pose potential security threats by attacking or misleading practical deep learning applications like autonomous driving and face recognition systems, which may cause pecuniary loss or even loss of life. On the other side, adversarial examples are also valuable and beneficial to deep learning models, as they are able to provide insights into their strengths, weaknesses, and blind spots.
  136. Network-based techniques have achieved satisfying performance owing to their great power for generating high-quality synthetic data.
  137. The experimental results show that our PS-GAN not only consistently outperforms state-of-the-art adversarial patch attack methods, but also has strong generalization ability and transferability.
  138. we propose an optimization algorithm that uses minimum perturbation and adversarial confidence thresholds to alternate between the minimization of adversarial loss and stealthiness loss.
  139. We conduct ablations on these techniques to see how each component contributes.
  140. For example, considering the DTD dataset, the best baseline FFF achieves an attack success rate of 48.52%, while the ASR of the vanilla L4Abase is up to 50.69%, and the fusing loss further boosts the performance to 50.74%.
  141. we found that alternately updating δ on A and B yields a better result
  142. The characteristic of using non-overlapping patches in ViTs reduces the influence of adversarial examples with the same noise magnitude on the overall results
  143. In the proposed Adv-Attribute, for the first time, we design a flexible multi-objective optimization paradigm to better balance the trade-off between stealthiness and attacking strength
  144. In addition to the face recognition, our method is also applicable to other scenarios, such as the traffic sign recognition task.
  145. On the commercial API, performance drops slightly compared to the open-source model, but remains at an acceptable level.
  146. will not arouse people’s suspicion
  147. we propose a Basin Hopping Evolution (BHE) algorithm to find the appropriate transparency of the watermark image and the appropriate position within the host image to embed the watermark.
  148. I-FGSM constructs an adversarial example by multi-step and smaller movements, which greatly improves the success rate of the attack.
  149. integrated seamlessly into existing methods for a broad class of machine learning tasks
  150. In summary, the current performance of transfer attacks is still unsatisfactory, especially for targeted attacks.
  151. Thus it is imperative to develop more robust 3D recognition models, which we leave to future work.
  152. To address this issue, our key insight is to consider the worst-case transformations rather than their expectation, since if the adversarial examples are resistant to the most harmful physical transformations, they can also resist much weaker transformations,
  153. method quantitatively and qualitatively outperforms state-of-the-art view synthesis methods,
  154. Furthermore, this strategy requires a template mesh with fixed topology to be provided as an initialization before optimization [22], which is typically unavailable for unconstrained real-world scenes.
  155. We circumvent this problem by instead encoding a continuous volume within the parameters of a deep fully-connected neural network, which not only produces significantly higher quality renderings than prior volumetric approaches, but also requires just a fraction of the storage cost of those sampled volumetric representations.
  156. Since the success of deep learning mainly takes root in its ability to extract features that can be used to classify images well, the outputs of the layers of a well-trained deep neural network already contain semantic meaning.
  157. In all experiments, we use the following training parameters, unless mentioned otherwise.
  158. Different body parts such as legs, head and muscles are stylized appropriately in accordance with their semantic role, and these styles are blended seamlessly across the surface to form a cohesive texture.
  159. computationally intensive (expensive) and time consuming (and costly)
  160. watermarking is frequently used to prove image ownership as a form of copyright protection. As such, watermarking methods prioritize robustness over secrecy: messages should be recoverable even after the encoded image is modified or distorted.
  161. In this paper, we describe results of an in-depth analysis of the threat posed to both machines and humans by deep-learning speech synthesis attacks
  162. we sweep over three learning rates ∈{0.1,0.01,0.001}
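Several entries above describe FGSM (a single step along the sign of the loss gradient) and its iterative variant I-FGSM. The following is a minimal NumPy sketch of that idea using a toy linear loss in place of a real network; all names and values here are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def fgsm_step(x, grad, eps):
    """One FGSM step: perturb x by eps along the sign of the loss gradient."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)  # keep pixels in [0, 1]

def i_fgsm(x, grad_fn, eps, alpha, steps):
    """I-FGSM: repeat smaller steps of size alpha, projecting the result
    back into the L-infinity ball of radius eps around the original x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy example: loss = w . x, so the gradient w.r.t. x is just w.
x = np.array([0.5, 0.5])
w = np.array([1.0, -1.0])
x_adv = i_fgsm(x, lambda z: w, eps=10 / 255, alpha=2 / 255, steps=10)
```

With eps = 10/255, matching the ℓ∞ budget quoted in the expressions above, every coordinate of `x_adv` stays within 10/255 of `x` regardless of the number of steps.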
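The fooling-rate and transfer-matrix phrasings above (white-box on the diagonal, black-box off the diagonal) correspond to simple counting. A small sketch with hypothetical counts, not figures from any paper:

```python
import numpy as np

def fooling_rate(preds_clean, preds_adv):
    """Fraction of inputs whose predicted label changes under perturbation."""
    return float(np.mean(preds_clean != preds_adv))

def transfer_matrix(fooled, generated):
    """Entry (i, j) is the share of adversarial examples crafted on source
    model i that also fool target model j, out of ALL examples generated
    on i (successful and failed alike). Diagonal entries are white-box
    rates; off-diagonal entries are black-box (transfer) rates."""
    return fooled / generated[:, None]

# Hypothetical counts for two models:
fooled = np.array([[80, 53],
                   [60, 90]])          # fooled[i, j]: examples from i fooling j
generated = np.array([100, 100])       # total examples generated on each source
rates = transfer_matrix(fooled, generated)
```

Here `rates[0, 0]` is model 0's white-box fooling rate (0.8) and `rates[0, 1]` its transfer rate onto model 1 (0.53).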
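Phrases like "All parameters are determined by grid search" and the learning-rate sweep over {0.1, 0.01, 0.001} describe the same procedure: exhaustively scoring every hyperparameter combination. A generic sketch; the objective below is a made-up toy, not any paper's training loop:

```python
import itertools

def grid_search(score_fn, grid):
    """Score every combination in `grid` (hyperparameter name -> list of
    candidate values) and return the best parameters and their score."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective peaking at lr=0.01, beta=0.9 (illustrative values only).
grid = {"lr": [0.1, 0.01, 0.001], "beta": [0.5, 0.9]}
best, score = grid_search(
    lambda p: -abs(p["lr"] - 0.01) - abs(p["beta"] - 0.9), grid
)
```

In practice `score_fn` would train and validate a model for each setting; the sweep structure is the same.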

Vocabulary

  1. image-agnostic (opposite: image-dependent)
  2. proportional adj. in proportion, proportionate — "the performance of the resulting perturbation is proportional to the available training data."
  3. adulterate vt. to adulterate (esp. food); adj. adulterated, impure
  4. susceptible adj. easily affected; sensitive; liable to be infected by; able to undergo
  5. image-specific — specific to a given image
  6. image-agnostic (i.e., universal)
  7. obviate vt. to avoid or remove (an inconvenience, difficulty, etc.)
  8. eliminate vt. to eliminate, remove, rule out
  9. in lieu of — instead of, in place of
  10. aggregating — combining, accumulating one by one
  11. conj. even if; although
  12. in the vicinity of — near
  13. precludes vt. to prevent; to rule out
  14. spurious adj. false; counterfeit; deceptive
  15. adopts vt. to take up, employ; to formally accept, pass
  16. Thereupon — whereupon, as a result
  17. derive vt. & vi. to obtain; to originate from
  18. encompasses vt. to surround; to include
  19. curvature n. a bending; a curved part; degree of curving
  20. vice-versa — and the other way around
  21. surrogate n. substitute; proxy
  22. distinct adj. entirely different, separate; clear, evident <==> different
  23. shed light on — to clarify, to make clear — "Through our formalization, we shed light on the most important factors for transferability."
  24. Without loss of generality,
  25. integrity and availability
  26. In a nutshell — in brief
  27. posit vt. to postulate, assume
  28. corroborate vt. to confirm, support (a claim, belief, theory, etc.)
  29. unmodified adj. unchanged
  30. input-agnostic
  31. image-agnostic
  32. adulterate vt. to adulterate (esp. food); adj. adulterated, impure
  33. To the best of our knowledge
  34. in conjunction with — together with, jointly
  35. uncover — to reveal
  36. be made up of — to be composed of
  37. justified explanation — a well-founded explanation
  38. alternative explanation
  39. counter-intuitive — contrary to intuition
  40. If this would be the case — if that were the case
  41. quasi-imperceptible — nearly imperceptible
  42. discrepancy n. difference, inconsistency
  43. Vanilla model — the plain (raw) model
  44. a large proportion of
  45. contingent adj. accidental; conditional, dependent on circumstances; n. delegation; detachment
  46. collaterally adj. related, accompanying; collateral n. security pledged
  47. a spectrum of — a continuous range of
  48. despite its structural simplicity
  49. on par with — comparable to
  50. the former
  51. In the real world, attacks are continuously evolving.
  52. a toy experiment
  53. denominator
  54. numerator
  55. in an end-to-end manner
  56. conjecture n. guess, surmise; vt. & vi. to guess, infer
  57. curvature n. a bending; a curved part; degree of curving
  58. derivative n. something derived; adj. imitative, derived
  59. enormous adj. huge, vast
  60. subsumed vt. to include under, classify within
  61. gives rise to — causes
  62. transductive learning
  63. inductive learning
  64. rigorous adj. rigorous; strict; severe
  65. sanitize vt. to disinfect; to clean up, make harmless
  66. Transiency — transience, impermanence
  67. irrevocable adj. unalterable, irreversible
  68. elucidates vt. to clarify, explain
  69. forego vt. to precede (in place, time, or degree)
  70. Chiefly adv. mainly; first of all
  71. impotent adj. powerless; ineffective; weak
  72. Accordingly, adv. correspondingly; therefore, consequently
  73. off-the-shelf object detectors — existing, ready-made object detectors
  74. substantial adj. solid; considerable; significant; substantive, essential
  75. this counter-intuitive behavior
  76. Even more importantly
  77. In the same year
  78. insensitive adj. unresponsive; unfeeling; numb
  79. inspect vt. to examine; to survey; vi. to carry out an inspection
  80. degrades vt. to demean; to downgrade; to degrade; vi. to deteriorate, decline
  81. in practice — in fact, in actual use
  82. at present — currently
  83. solve / address (a problem)
  84. indicates, represents, denotes
  85. imperative / necessary adj. essential, urgent, vital; commanding; n. a necessity, an imperative
  86. ascribes — to attribute; sb ascribes sth to
  87. by leveraging — by making use of
  88. ad-hoc — a Latin phrase meaning "for this specific purpose": improvised, one-off, devised for a particular problem or task rather than for general use
  89. inconspicuousness — unobtrusiveness
  90. the color gamut of our printer
  91. on-the-fly; in practice
  92. highlighting — emphasizing
  93. with respect to (w.r.t.)
  94. Intuitively adv. by intuition; intuitively
  95. the channels of features
  96. which is relatively 6 times smaller.
  97. indeed effective
  98. our intuition
  99. effectiveness and efficiency
  100. precisely
  101. ongoing — continuing ("so this line of research is still ongoing")
  102. In an analogous manner — in a similar way
  103. literature — 1. literary works 2. the research literature
  104. The initial experimental results showed that the minimum average distortion is very low and cannot be distinguished by human observers.
  105. To sum up, anyhow, anyway, in a word, in conclusion, on all accounts
  106. jeopardize vt. to endanger, harm
  107. qualitatively and quantitatively
  108. corroborate
  109. intra-architecture and inter-architecture cases
  110. tampering
  111. the feasibility and generalizability of PAR
  112. physically-realizable forms
  113. easy to perceive
  114. ubiquitous
  115. is tailored to — equivalent to "be designed to"
  116. a recent line of work
  117. open interval
  118. engender / yield / produce
  119. facilitates / utilize / use
  120. In light of this observation,
  121. is conducive to
  122. harnessing — exploiting; cf. exploit / utilize / use / leverage / make use of
  123. postulate / speculate / conjecture — to surmise
  124. exemplar n. model, example; prototype, archetype
  125. inductive bias
  126. synergy n. synergy; combined effect
  127. In a nutshell — in short
  128. A and B are comparable — A is on a par with B
  129. the premise of
  130. a granularity of — e.g., "range the internal update frequency M from 0 to 32 with a granularity of 4."
  131. preserve — to retain
  132. dubbed — named, called
  133. alleviates — mitigates, relieves
  134. scarcity n. scarcity, lack / deficiencies (of prior work)
  135. progressively — gradually
  136. A surge of
  137. the penultimate layer — the second-to-last layer
  138. remainder / following — the subsequent part (e.g., "in the remainder of the paper")
  139. vanilla — plain, original
  140. sumptuously — lavishly, luxuriously
  141. attenuations — weakening
  142. surreptitiously — secretly
  143. delineates — depicts, describes
  144. amounts to — is equivalent to ("Eq. 4 amounts to solving a min-max optimization problem")
  145. holistic adj. whole, all-encompassing
  146. sever vt. to cut off, break off; vi. to split
  147. inductive adj. inductive; induced
  148. the tendency of — the trend of
  149. predictive adj. predictive, prognostic
  150. pitfall
  151. endow sth with the ability to
  152. prospective employers