This is the 4.14.164 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl4a/2MACgkQONu9yGCS
 aT4BwA//diCficMfLINrc/9bMq3VS2Y+/lnuURMXEM9MJibjQCUS1spc6YhhNFrE
 8m3aavAYywjjD3zGHj8KEaKQFDrPQxYQDzPOPK9rxjpxlUFpnYWUGlI2krpwBV6c
 8xAekM62sMEIq09EHqqhKVls+WmYi47/pdfGAAt3PUR8c2eTOlxiFsiwq4nuZDdv
 rcMkQm87V8Wn1Nq+Dfp6R3U+X9f4DcU5n5cKiGq6ujoalT7h5/jj36JIFxBwMapF
 WjpqXMUUeylXxXnNFMUbEMg+lEqJlWfvj1sxdxyMdgS+L9rc9bXk/NTub4TZPaXu
 odwMl9RKWjJvFsvn26Pc4s31K2raEhCDYdkVoFTXWsc7vbE4A/h/yAw4Wq+cuBI4
 H4fBXYYZ3D0Il9kxYYbfSaki5z1YbI54tkWcrs8f8jli5C0M3Wkkux1TA4HPj2Ja
 8zJFH0++cyfpuKRiYXro+H2Tq4KxBwsWEtync8230MEywlTxkz4IIue+SCgVV+WD
 jmg/enRjbnkpYBSH1pKOdAAga0kHSxtwWlfLFrjhcgGse8y6sCJhUOPPcQMnf/k0
 Jrmc3InHg+mtLiSsJXAp4iGABJlW+W/ouaxaxYoA9wucwQlcgxXpkigl5rOgFTma
 153RYc1TSZJAe+cjx42qZxRxcD8/Vg5d6D2tL1otbMSIsD3e7Gk=
 =sq63
 -----END PGP SIGNATURE-----

Merge tag 'v4.14.164' into 4.14-2.3.x-imx

This is the 4.14.164 stable release

Conflicts:
	arch/arm/Kconfig.debug
	arch/arm/boot/dts/imx7s.dtsi
	arch/arm/mach-imx/cpuidle-imx6q.c
	arch/arm/mach-imx/cpuidle-imx6sx.c
	arch/arm64/kernel/cpu_errata.c
	arch/arm64/kvm/hyp/tlb.c
	drivers/crypto/caam/caamalg.c
	drivers/crypto/mxs-dcp.c
	drivers/dma/imx-sdma.c
	drivers/gpio/gpio-vf610.c
	drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
	drivers/input/keyboard/imx_keypad.c
	drivers/input/keyboard/snvs_pwrkey.c
	drivers/mmc/core/block.c
	drivers/mmc/core/queue.h
	drivers/mmc/host/sdhci-esdhc-imx.c
	drivers/net/can/flexcan.c
	drivers/net/can/rx-offload.c
	drivers/net/ethernet/freescale/fec_main.c
	drivers/net/wireless/ath/ath10k/pci.c
	drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
	drivers/pci/dwc/pci-imx6.c
	drivers/spi/spi-fsl-lpspi.c
	drivers/usb/dwc3/gadget.c
	include/net/tcp.h
	sound/soc/fsl/Kconfig
	sound/soc/fsl/fsl_esai.c
Marcel Ziswiler 2020-02-04 11:31:11 +01:00
commit 500f9a04cd
4280 changed files with 54655 additions and 23252 deletions


@@ -29,7 +29,7 @@ Contact: Bjørn Mork <bjorn@mork.no>
Description:
Unsigned integer.
Write a number ranging from 1 to 127 to add a qmap mux
Write a number ranging from 1 to 254 to add a qmap mux
based network device, supported by recent Qualcomm based
modems.
@@ -46,5 +46,5 @@ Contact: Bjørn Mork <bjorn@mork.no>
Description:
Unsigned integer.
Write a number ranging from 1 to 127 to delete a previously
Write a number ranging from 1 to 254 to delete a previously
created qmap mux based network device.
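
For illustration, a minimal C sketch that creates a qmap mux based network
device by writing a mux id to the add_mux attribute. The attribute path is
not shown in this hunk; /sys/class/net/wwan0/qmi/add_mux and the interface
name wwan0 are assumptions used only for this example::

    /* Hedged sketch: add a qmap mux based network device.
     * The sysfs path and interface name below are assumptions,
     * not taken from this hunk.
     */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/net/wwan0/qmi/add_mux";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        /* Valid mux ids now range from 1 to 254. */
        fprintf(f, "42\n");
        if (fclose(f)) {
            perror("fclose");
            return 1;
        }
        return 0;
    }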


@@ -380,6 +380,9 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/spectre_v2
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
/sys/devices/system/cpu/vulnerabilities/l1tf
/sys/devices/system/cpu/vulnerabilities/mds
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
/sys/devices/system/cpu/vulnerabilities/itlb_multihit
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities
@@ -392,8 +395,7 @@ Description: Information about CPU vulnerabilities
"Vulnerable" CPU is affected and no mitigation in effect
"Mitigation: $M" CPU is affected and mitigation $M is in effect
Details about the l1tf file can be found in
Documentation/admin-guide/l1tf.rst
See also: Documentation/admin-guide/hw-vuln/index.rst
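
A small user-space sketch that dumps these status strings; it assumes only
the directory layout documented above and enumerates whatever files the
running kernel provides (including the new mds, tsx_async_abort and
itlb_multihit entries)::

    /* Minimal sketch: print each vulnerability file and its first line. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *dir = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dir);
        struct dirent *e;
        char path[512], line[256];

        if (!d) {
            perror("opendir");
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            FILE *f;

            if (e->d_name[0] == '.')
                continue;
            snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
            f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(line, sizeof(line), f))
                printf("%-18s %s", e->d_name, line);
            fclose(f);
        }
        closedir(d);
        return 0;
    }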
What: /sys/devices/system/cpu/smt
/sys/devices/system/cpu/smt/active


@@ -0,0 +1,16 @@
========================
Hardware vulnerabilities
========================
This section describes CPU vulnerabilities and provides an overview of the
possible mitigations along with guidance for selecting mitigations if they
are configurable at compile, boot or run time.
.. toctree::
:maxdepth: 1
spectre
l1tf
mds
tsx_async_abort
multihit.rst


@@ -445,6 +445,7 @@ The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
module parameter is ignored and writes to the sysfs file are rejected.
.. _mitigation_selection:
Mitigation selection guide
--------------------------
@@ -556,7 +557,7 @@ When nested virtualization is in use, three operating systems are involved:
the bare metal hypervisor, the nested hypervisor and the nested virtual
machine. VMENTER operations from the nested hypervisor into the nested
guest will always be processed by the bare metal hypervisor. If KVM is the
bare metal hypervisor it wiil:
bare metal hypervisor it will:
- Flush the L1D cache on every switch from the nested hypervisor to the
nested virtual machine, so that the nested hypervisor's secrets are not


@@ -0,0 +1,311 @@
MDS - Microarchitectural Data Sampling
======================================
Microarchitectural Data Sampling is a hardware vulnerability which allows
unprivileged speculative access to data which is available in various CPU
internal buffers.
Affected processors
-------------------
This vulnerability affects a wide range of Intel processors. The
vulnerability is not present on:
- Processors from AMD, Centaur and other non Intel vendors
- Older processor models, where the CPU family is < 6
- Some Atoms (Bonnell, Saltwell, Goldmont, GoldmontPlus)
- Intel processors which have the ARCH_CAP_MDS_NO bit set in the
IA32_ARCH_CAPABILITIES MSR.
Whether a processor is affected or not can be read out from the MDS
vulnerability file in sysfs. See :ref:`mds_sys_info`.
Not all processors are affected by all variants of MDS, but the mitigation
is identical for all of them so the kernel treats them as a single
vulnerability.
Related CVEs
------------
The following CVE entries are related to the MDS vulnerability:
============== ===== ===================================================
CVE-2018-12126 MSBDS Microarchitectural Store Buffer Data Sampling
CVE-2018-12130 MFBDS Microarchitectural Fill Buffer Data Sampling
CVE-2018-12127 MLPDS Microarchitectural Load Port Data Sampling
CVE-2019-11091 MDSUM Microarchitectural Data Sampling Uncacheable Memory
============== ===== ===================================================
Problem
-------
When performing store, load, L1 refill operations, processors write data
into temporary microarchitectural structures (buffers). The data in the
buffer can be forwarded to load operations as an optimization.
Under certain conditions, usually a fault/assist caused by a load
operation, data unrelated to the load memory address can be speculatively
forwarded from the buffers. Because the load operation causes a fault or
assist and its result will be discarded, the forwarded data will not cause
incorrect program execution or state changes. But a malicious operation
may be able to forward this speculative data to a disclosure gadget which
allows in turn to infer the value via a cache side channel attack.
Because the buffers are potentially shared between Hyper-Threads, cross
Hyper-Thread attacks are possible.
Deeper technical information is available in the MDS specific x86
architecture section: :ref:`Documentation/x86/mds.rst <mds>`.
Attack scenarios
----------------
Attacks against the MDS vulnerabilities can be mounted from malicious,
non-privileged user space applications running on hosts or guests. Malicious
guest OSes can obviously mount attacks as well.
Contrary to other speculation based vulnerabilities, the MDS vulnerability
does not allow the attacker to control the memory target address. As a
consequence the attacks are purely sampling based, but as demonstrated with
the TLBleed attack, samples can be postprocessed successfully.
Web-Browsers
^^^^^^^^^^^^
It's unclear whether attacks through Web-Browsers are possible at
all. The exploitation through Java-Script is considered very unlikely,
but other widely used web technologies like Webassembly could possibly be
abused.
.. _mds_sys_info:
MDS system information
-----------------------
The Linux kernel provides a sysfs interface to enumerate the current MDS
status of the system: whether the system is vulnerable, and which
mitigations are active. The relevant sysfs file is:
/sys/devices/system/cpu/vulnerabilities/mds
The possible values in this file are:
.. list-table::
* - 'Not affected'
- The processor is not vulnerable
* - 'Vulnerable'
- The processor is vulnerable, but no mitigation enabled
* - 'Vulnerable: Clear CPU buffers attempted, no microcode'
- The processor is vulnerable but microcode is not updated.
The mitigation is enabled on a best effort basis. See :ref:`vmwerv`
* - 'Mitigation: Clear CPU buffers'
- The processor is vulnerable and the CPU buffer clearing mitigation is
enabled.
If the processor is vulnerable then the following information is appended
to the above information:
======================== ============================================
'SMT vulnerable' SMT is enabled
'SMT mitigated' SMT is enabled and mitigated
'SMT disabled' SMT is disabled
'SMT Host state unknown' Kernel runs in a VM, Host SMT state unknown
======================== ============================================
.. _vmwerv:
Best effort mitigation mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the processor is vulnerable, but the availability of the microcode based
mitigation mechanism is not advertised via CPUID the kernel selects a best
effort mitigation mode. This mode invokes the mitigation instructions
without a guarantee that they clear the CPU buffers.
This is done to address virtualization scenarios where the host has the
microcode update applied, but the hypervisor is not yet updated to expose
the CPUID to the guest. If the host has updated microcode the protection
takes effect; otherwise a few CPU cycles are wasted pointlessly.
The state in the mds sysfs file reflects this situation accordingly.
Mitigation mechanism
-------------------------
The kernel detects the affected CPUs and the presence of the microcode
which is required.
If a CPU is affected and the microcode is available, then the kernel
enables the mitigation by default. The mitigation can be controlled at boot
time via a kernel command line option. See
:ref:`mds_mitigation_control_command_line`.
.. _cpu_buffer_clear:
CPU buffer clearing
^^^^^^^^^^^^^^^^^^^
The mitigation for MDS clears the affected CPU buffers on return to user
space and when entering a guest.
If SMT is enabled it also clears the buffers on idle entry when the CPU
is only affected by MSBDS and not any other MDS variant, because the
other variants cannot be protected against cross Hyper-Thread attacks.
For CPUs which are only affected by MSBDS the user space, guest and idle
transition mitigations are sufficient and SMT is not affected.
.. _virt_mechanism:
Virtualization mitigation
^^^^^^^^^^^^^^^^^^^^^^^^^
The protection for host to guest transition depends on the L1TF
vulnerability of the CPU:
- CPU is affected by L1TF:
If the L1D flush mitigation is enabled and up to date microcode is
available, the L1D flush mitigation is automatically protecting the
guest transition.
If the L1D flush mitigation is disabled then the MDS mitigation is
invoked explicitly when the host MDS mitigation is enabled.
For details on L1TF and virtualization see:
:ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <mitigation_control_kvm>`.
- CPU is not affected by L1TF:
CPU buffers are flushed before entering the guest when the host MDS
mitigation is enabled.
The resulting MDS protection matrix for the host to guest transition:
============ ===== ============= ============ =================
L1TF         MDS   VMX-L1FLUSH   Host MDS     MDS-State
============ ===== ============= ============ =================
Don't care   No    Don't care    N/A          Not affected
Yes          Yes   Disabled      Off          Vulnerable
Yes          Yes   Disabled      Full         Mitigated
Yes          Yes   Enabled       Don't care   Mitigated
No           Yes   N/A           Off          Vulnerable
No           Yes   N/A           Full         Mitigated
============ ===== ============= ============ =================
This only covers the host to guest transition, i.e. prevents leakage from
host to guest, but does not protect the guest internally. Guests need to
have their own protections.
.. _xeon_phi:
XEON PHI specific considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XEON PHI processor family is affected by MSBDS which can be exploited
cross Hyper-Threads when entering idle states. Some XEON PHI variants allow
the use of MWAIT in user space (Ring 3), which opens a potential attack vector
for malicious user space. The exposure can be disabled on the kernel
command line with the 'ring3mwait=disable' command line option.
XEON PHI is not affected by the other MDS variants and MSBDS is mitigated
before the CPU enters an idle state. As XEON PHI is not affected by L1TF
either, disabling SMT is not required for full protection.
.. _mds_smt_control:
SMT control
^^^^^^^^^^^
All MDS variants except MSBDS can be attacked cross Hyper-Threads. That
means on CPUs which are affected by MFBDS or MLPDS it is necessary to
disable SMT for full protection. These are most of the affected CPUs; the
exception is XEON PHI, see :ref:`xeon_phi`.
Disabling SMT can have a significant performance impact, but the impact
depends on the type of workloads.
See the relevant chapter in the L1TF mitigation documentation for details:
:ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <smt_control>`.
.. _mds_mitigation_control_command_line:
Mitigation control on the kernel command line
---------------------------------------------
The kernel command line allows controlling the MDS mitigations at boot
time with the option "mds=". The valid arguments for this option are:
============ =============================================================
full If the CPU is vulnerable, enable all available mitigations
for the MDS vulnerability, CPU buffer clearing on exit to
userspace and when entering a VM. Idle transitions are
protected as well if SMT is enabled.
It does not automatically disable SMT.
full,nosmt The same as mds=full, with SMT disabled on vulnerable
CPUs. This is the complete mitigation.
off Disables MDS mitigations completely.
============ =============================================================
Not specifying this option is equivalent to "mds=full". For processors
that are affected by both TAA (TSX Asynchronous Abort) and MDS,
specifying just "mds=off" without an accompanying "tsx_async_abort=off"
will have no effect as the same mitigation is used for both
vulnerabilities.
Mitigation selection guide
--------------------------
1. Trusted userspace
^^^^^^^^^^^^^^^^^^^^
If all userspace applications are from a trusted source and do not
execute untrusted code which is supplied externally, then the mitigation
can be disabled.
2. Virtualization with trusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The same considerations as above versus trusted user space apply.
3. Virtualization with untrusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The protection depends on the state of the L1TF mitigations.
See :ref:`virt_mechanism`.
If the MDS mitigation is enabled and SMT is disabled, guest to host and
guest to guest attacks are prevented.
.. _mds_default_mitigations:
Default mitigations
-------------------
The kernel default mitigations for vulnerable processors are:
- Enable CPU buffer clearing
The kernel does not by default enforce the disabling of SMT, which leaves
SMT systems vulnerable when running untrusted code. The same rationale as
for L1TF applies.
See :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <default_mitigations>`.


@@ -0,0 +1,163 @@
iTLB multihit
=============
iTLB multihit is an erratum where some processors may incur a machine check
error, possibly resulting in an unrecoverable CPU lockup, when an
instruction fetch hits multiple entries in the instruction TLB. This can
occur when the page size is changed along with either the physical address
or cache type. A malicious guest running on a virtualized system can
exploit this erratum to perform a denial of service attack.
Affected processors
-------------------
Variations of this erratum are present on most Intel Core and Xeon processor
models. The erratum is not present on:
- non-Intel processors
- Some Atoms (Airmont, Bonnell, Goldmont, GoldmontPlus, Saltwell, Silvermont)
- Intel processors that have the PSCHANGE_MC_NO bit set in the
IA32_ARCH_CAPABILITIES MSR.
Related CVEs
------------
The following CVE entry is related to this issue:
============== =================================================
CVE-2018-12207 Machine Check Error Avoidance on Page Size Change
============== =================================================
Problem
-------
Privileged software, including the OS and virtual machine managers (VMM), is in
charge of memory management. A key component in memory management is the control
of the page tables. Modern processors use virtual memory, a technique that creates
the illusion of a very large memory for processors. This virtual space is split
into pages of a given size. Page tables translate virtual addresses to physical
addresses.
To reduce latency when performing a virtual to physical address translation,
processors include a structure, called TLB, that caches recent translations.
There are separate TLBs for instruction (iTLB) and data (dTLB).
Under this erratum, instructions are fetched from a linear address translated
using a 4 KB translation cached in the iTLB. Privileged software modifies the
paging structures so that the same linear address now uses a large page size
(2 MB, 4 MB, 1 GB) with a different physical address or memory type. After the page
structure modification but before the software invalidates any iTLB entries for
the linear address, a code fetch that happens on the same linear address may
cause a machine-check error which can result in a system hang or shutdown.
Attack scenarios
----------------
Attacks against the iTLB multihit erratum can be mounted from malicious
guests in a virtualized system.
iTLB multihit system information
--------------------------------
The Linux kernel provides a sysfs interface to enumerate the current iTLB
multihit status of the system: whether the system is vulnerable and which
mitigations are active. The relevant sysfs file is:
/sys/devices/system/cpu/vulnerabilities/itlb_multihit
The possible values in this file are:
.. list-table::
* - Not affected
- The processor is not vulnerable.
* - KVM: Mitigation: Split huge pages
- Software changes mitigate this issue.
* - KVM: Vulnerable
- The processor is vulnerable, but no mitigation enabled
Enumeration of the erratum
--------------------------------
A new bit has been allocated in the IA32_ARCH_CAPABILITIES MSR (PSCHANGE_MC_NO)
and will be set on CPUs which are mitigated against this issue.
======================================= =========== ===============================
IA32_ARCH_CAPABILITIES MSR Not present Possibly vulnerable, check model
IA32_ARCH_CAPABILITIES[PSCHANGE_MC_NO] '0' Likely vulnerable, check model
IA32_ARCH_CAPABILITIES[PSCHANGE_MC_NO] '1' Not vulnerable
======================================= =========== ===============================
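As a rough way to check this enumeration from user space, the sketch below
reads IA32_ARCH_CAPABILITIES through the msr driver (requires root and a
kernel with /dev/cpu/*/msr support). The MSR address (0x10a) and the
PSCHANGE_MC_NO bit position (bit 6) are taken from the x86 msr-index
definitions rather than from this document, so treat them as assumptions::

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t caps = 0;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/cpu/0/msr");
            return 1;
        }
        /* A failed read means the MSR is not enumerated on this CPU,
         * i.e. the "Not present" row of the table above. */
        if (pread(fd, &caps, sizeof(caps), 0x10a) != sizeof(caps)) {
            printf("IA32_ARCH_CAPABILITIES not present, check model\n");
            close(fd);
            return 0;
        }
        close(fd);
        printf("PSCHANGE_MC_NO = %d\n", (int)((caps >> 6) & 1));
        return 0;
    }

In practice the itlb_multihit sysfs file described above already reports
the same information in a friendlier form.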
Mitigation mechanism
-------------------------
This erratum can be mitigated by restricting the use of large page sizes to
non-executable pages. This forces all iTLB entries to be 4K, and removes
the possibility of multiple hits.
In order to mitigate the vulnerability, KVM initially marks all huge pages
as non-executable. If the guest attempts to execute in one of those pages,
the page is broken down into 4K pages, which are then marked executable.
If EPT is disabled or not available on the host, KVM is in control of TLB
flushes and the problematic situation cannot happen. However, the shadow
EPT paging mechanism used by nested virtualization is vulnerable, because
the nested guest can trigger multiple iTLB hits by modifying its own
(non-nested) page tables. For simplicity, KVM will make large pages
non-executable in all shadow paging modes.
Mitigation control on the kernel command line and KVM - module parameter
------------------------------------------------------------------------
The KVM hypervisor mitigation mechanism for marking huge pages as
non-executable can be controlled with a module parameter "nx_huge_pages=".
The kernel command line allows controlling the iTLB multihit mitigations at
boot time with the option "kvm.nx_huge_pages=".
The valid arguments for these options are:
========== ================================================================
force Mitigation is enabled. In this case, the mitigation implements
non-executable huge pages in Linux kernel KVM module. All huge
pages in the EPT are marked as non-executable.
If a guest attempts to execute in one of those pages, the page is
broken down into 4K pages, which are then marked executable.
off Mitigation is disabled.
auto Enable mitigation only if the platform is affected and the kernel
was not booted with the "mitigations=off" command line parameter.
This is the default option.
========== ================================================================
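A short sketch for checking which of these modes is in effect on a running
system; it assumes the usual sysfs location for module parameters and only
works once the kvm module is loaded::

    #include <stdio.h>

    int main(void)
    {
        const char *p = "/sys/module/kvm/parameters/nx_huge_pages";
        char buf[64];
        FILE *f = fopen(p, "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        /* The parameter typically reads back as Y, N or auto,
         * depending on the kernel version. */
        if (fgets(buf, sizeof(buf), f))
            printf("kvm.nx_huge_pages: %s", buf);
        fclose(f);
        return 0;
    }

The /sys/devices/system/cpu/vulnerabilities/itlb_multihit file remains the
authoritative summary of whether the mitigation is active.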
Mitigation selection guide
--------------------------
1. No virtualization in use
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The system is protected by the kernel unconditionally and no further
action is required.
2. Virtualization with trusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the guest comes from a trusted source, you may assume that the guest will
not attempt to maliciously exploit these errata and no further action is
required.
3. Virtualization with untrusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the guest comes from an untrusted source, the host kernel will need
to apply the iTLB multihit mitigation via the kernel command line or the kvm
module parameter.


@@ -0,0 +1,769 @@
.. SPDX-License-Identifier: GPL-2.0
Spectre Side Channels
=====================
Spectre is a class of side channel attacks that exploit branch prediction
and speculative execution on modern CPUs to read memory, possibly
bypassing access controls. Speculative execution side channel exploits
do not modify memory but attempt to infer privileged data in the memory.
This document covers Spectre variant 1 and Spectre variant 2.
Affected processors
-------------------
Speculative execution side channel methods affect a wide range of modern
high performance processors, since most modern high speed processors
use branch prediction and speculative execution.
The following CPUs are vulnerable:
- Intel Core, Atom, Pentium, and Xeon processors
- AMD Phenom, EPYC, and Zen processors
- IBM POWER and zSeries processors
- Higher end ARM processors
- Apple CPUs
- Higher end MIPS CPUs
- Likely most other high performance CPUs. Contact your CPU vendor for details.
Whether a processor is affected or not can be read out from the Spectre
vulnerability files in sysfs. See :ref:`spectre_sys_info`.
Related CVEs
------------
The following CVE entries describe Spectre variants:
============= ======================= ==========================
CVE-2017-5753 Bounds check bypass Spectre variant 1
CVE-2017-5715 Branch target injection Spectre variant 2
CVE-2019-1125 Spectre v1 swapgs Spectre variant 1 (swapgs)
============= ======================= ==========================
Problem
-------
CPUs use speculative operations to improve performance. That may leave
traces of memory accesses or computations in the processor's caches,
buffers, and branch predictors. Malicious software may be able to
influence the speculative execution paths, and then use the side effects
of the speculative execution in the CPUs' caches and buffers to infer
privileged data touched during the speculative execution.
Spectre variant 1 attacks take advantage of speculative execution of
conditional branches, while Spectre variant 2 attacks use speculative
execution of indirect branches to leak privileged memory.
See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
Spectre variant 1 (Bounds Check Bypass)
---------------------------------------
The bounds check bypass attack :ref:`[2] <spec_ref2>` takes advantage
of speculative execution that bypasses conditional branch instructions
used for memory access bounds check (e.g. checking if the index of an
array results in memory access within a valid range). This results in
memory accesses to invalid memory (with out-of-bound index) that are
done speculatively before validation checks resolve. Such speculative
memory accesses can leave side effects, creating side channels which
leak information to the attacker.
There are some extensions of Spectre variant 1 attacks for reading data
over the network, see :ref:`[12] <spec_ref12>`. However such attacks
are difficult, low bandwidth, fragile, and are considered low risk.
Note that, despite the "Bounds Check Bypass" name, Spectre variant 1 is not
only about user-controlled array bounds checks. It can affect any
conditional checks. The kernel entry code interrupt, exception, and NMI
handlers all have conditional swapgs checks. Those may be problematic
in the context of Spectre v1, as kernel code can speculatively run with
a user GS.
Spectre variant 2 (Branch Target Injection)
-------------------------------------------
The branch target injection attack takes advantage of speculative
execution of indirect branches :ref:`[3] <spec_ref3>`. The indirect
branch predictors inside the processor used to guess the target of
indirect branches can be influenced by an attacker, causing gadget code
to be speculatively executed, thus exposing sensitive data touched by
the victim. The side effects left in the CPU's caches during speculative
execution can be measured to infer data values.
.. _poison_btb:
In Spectre variant 2 attacks, the attacker can steer speculative indirect
branches in the victim to gadget code by poisoning the branch target
buffer of a CPU used for predicting indirect branch addresses. Such
poisoning could be done by indirect branching into existing code,
with the address offset of the indirect branch under the attacker's
control. Since the branch prediction on impacted hardware does not
fully disambiguate branch address and uses the offset for prediction,
this could cause privileged code's indirect branch to jump to a gadget
code with the same offset.
The most useful gadgets take an attacker-controlled input parameter (such
as a register value) so that the memory read can be controlled. Gadgets
without input parameters might be possible, but the attacker would have
very little control over what memory can be read, reducing the risk of
the attack revealing useful data.
One other variant 2 attack vector is for the attacker to poison the
return stack buffer (RSB) :ref:`[13] <spec_ref13>` to cause speculative
subroutine return instruction execution to go to a gadget. An attacker's
imbalanced subroutine call instructions might "poison" entries in the
return stack buffer which are later consumed by a victim's subroutine
return instructions. This attack can be mitigated by flushing the return
stack buffer on context switch, or virtual machine (VM) exit.
On systems with simultaneous multi-threading (SMT), attacks are possible
from the sibling thread, as level 1 cache and branch target buffer
(BTB) may be shared between hardware threads in a CPU core. A malicious
program running on the sibling thread may influence its peer's BTB to
steer its indirect branch speculations to gadget code, and measure the
speculative execution's side effects left in level 1 cache to infer the
victim's data.
Attack scenarios
----------------
The following list of attack scenarios have been anticipated, but may
not cover all possible attack vectors.
1. A user process attacking the kernel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spectre variant 1
~~~~~~~~~~~~~~~~~
The attacker passes a parameter to the kernel via a register or
via a known address in memory during a syscall. Such a parameter may
be used later by the kernel as an index to an array or to derive
a pointer for a Spectre variant 1 attack. The index or pointer
is invalid, but bounds checks are bypassed in the code branch taken
for speculative execution. This could cause privileged memory to be
accessed and leaked.
For kernel code that has been identified where data pointers could
potentially be influenced for Spectre attacks, new "nospec" accessor
macros are used to prevent speculative loading of data.
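The following is a schematic, kernel-style sketch of how such an accessor
is typically applied; the array, size constant and function names are made
up for illustration and are not taken from actual kernel code::

    #include <linux/errno.h>
    #include <linux/nospec.h>

    #define EXAMPLE_TABLE_SIZE 16                  /* hypothetical */
    static long example_table[EXAMPLE_TABLE_SIZE]; /* hypothetical */

    long example_lookup(unsigned long index)
    {
        if (index >= EXAMPLE_TABLE_SIZE)
            return -EINVAL;

        /* Clamp the index so that, even if the bounds check above is
         * mis-speculated, the subsequent load stays inside the array. */
        index = array_index_nospec(index, EXAMPLE_TABLE_SIZE);
        return example_table[index];
    }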
Spectre variant 1 (swapgs)
~~~~~~~~~~~~~~~~~~~~~~~~~~
An attacker can train the branch predictor to speculatively skip the
swapgs path for an interrupt or exception. If they initialize
the GS register to a user-space value and the swapgs is speculatively
skipped, subsequent GS-related percpu accesses in the speculation
window will be done with the attacker-controlled GS value. This
could cause privileged memory to be accessed and leaked.
For example:
::
if (coming from user space)
swapgs
mov %gs:<percpu_offset>, %reg
mov (%reg), %reg1
When coming from user space, the CPU can speculatively skip the
swapgs, and then do a speculative percpu load using the user GS
value. So the user can speculatively force a read of any kernel
value. If a gadget exists which uses the percpu value as an address
in another load/store, then the contents of the kernel value may
become visible via an L1 side channel attack.
A similar attack exists when coming from kernel space. The CPU can
speculatively do the swapgs, causing the user GS to get used for the
rest of the speculative window.
Spectre variant 2
~~~~~~~~~~~~~~~~~
A spectre variant 2 attacker can :ref:`poison <poison_btb>` the branch
target buffer (BTB) before issuing syscall to launch an attack.
After entering the kernel, the kernel could use the poisoned branch
target buffer on indirect jump and jump to gadget code in speculative
execution.
If an attacker tries to control the memory addresses leaked during
speculative execution, he would also need to pass a parameter to the
gadget, either through a register or a known address in memory. After
the gadget has executed, he can measure the side effect.
The kernel can protect itself against consuming poisoned branch
target buffer entries by using return trampolines (also known as
"retpoline") :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` for all
indirect branches. Return trampolines trap speculative execution paths
to prevent jumping to gadget code during speculative execution.
x86 CPUs with Enhanced Indirect Branch Restricted Speculation
(Enhanced IBRS) available in hardware should use the feature to
mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is
more efficient than retpoline.
There may be gadget code in firmware which could be exploited with
Spectre variant 2 attack by a rogue user process. To mitigate such
attacks on x86, Indirect Branch Restricted Speculation (IBRS) feature
is turned on before the kernel invokes any firmware code.
2. A user process attacking another user process
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A malicious user process can try to attack another user process,
either via a context switch on the same hardware thread, or from the
sibling hyperthread sharing a physical processor core on simultaneous
multi-threading (SMT) system.
Spectre variant 1 attacks generally require passing parameters
between the processes, which needs a data passing relationship, such
as remote procedure calls (RPC). Those parameters are used in gadget
code to derive invalid data pointers accessing privileged memory in
the attacked process.
Spectre variant 2 attacks can be launched from a rogue process by
:ref:`poisoning <poison_btb>` the branch target buffer. This can
influence the indirect branch targets for a victim process that either
runs later on the same hardware thread, or runs concurrently on
a sibling hardware thread sharing the same physical core.
A user process can protect itself against Spectre variant 2 attacks
by using the prctl() syscall to disable indirect branch speculation
for itself. An administrator can also cordon off an unsafe process
from polluting the branch target buffer by disabling the process's
indirect branch speculation. This comes with a performance cost
from not using indirect branch speculation and clearing the branch
target buffer. When SMT is enabled on x86, for a process that has
indirect branch speculation disabled, Single Threaded Indirect Branch
Predictors (STIBP) :ref:`[4] <spec_ref4>` are turned on to prevent the
sibling thread from controlling branch target buffer. In addition,
the Indirect Branch Prediction Barrier (IBPB) is issued to clear the
branch target buffer when context switching to and from such process.
On x86, the return stack buffer is stuffed on context switch.
This prevents the branch target buffer from being used for branch
prediction when the return stack buffer underflows while switching to
a deeper call stack. Any poisoned entries in the return stack buffer
left by the previous process will also be cleared.
User programs should use address space randomization to make attacks
more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2).
3. A virtualized guest attacking the host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The attack mechanism is similar to how user processes attack the
kernel. The kernel is entered via hyper-calls or other virtualization
exit paths.
For Spectre variant 1 attacks, rogue guests can pass parameters
(e.g. in registers) via hyper-calls to derive invalid pointers to
speculate into privileged memory after entering the kernel. For places
where such kernel code has been identified, nospec accessor macros
are used to stop speculative memory access.
For Spectre variant 2 attacks, rogue guests can :ref:`poison
<poison_btb>` the branch target buffer or return stack buffer, causing
the kernel to jump to gadget code in the speculative execution paths.
To mitigate variant 2, the host kernel can use return trampolines
for indirect branches to bypass the poisoned branch target buffer,
and flush the return stack buffer on VM exit. This prevents rogue
guests from affecting indirect branching in the host kernel.
To protect host processes from rogue guests, host processes can have
indirect branch speculation disabled via prctl(). The branch target
buffer is cleared before context switching to such processes.
4. A virtualized guest attacking another guest
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A rogue guest may attack another guest to get data accessible by the
other guest.
Spectre variant 1 attacks are possible if parameters can be passed
between guests. This may be done via mechanisms such as shared memory
or message passing. Such parameters could be used to derive data
pointers to privileged data in guest. The privileged data could be
accessed by gadget code in the victim's speculation paths.
Spectre variant 2 attacks can be launched from a rogue guest by
:ref:`poisoning <poison_btb>` the branch target buffer or the return
stack buffer. Such poisoned entries could be used to influence
speculative execution paths in the victim guest.
The Linux kernel mitigates attacks on other guests running in the same
CPU hardware thread by flushing the return stack buffer on VM exit,
and clearing the branch target buffer before switching to a new guest.
If SMT is used, Spectre variant 2 attacks from an untrusted guest
in the sibling hyperthread can be mitigated by the administrator,
by turning off the unsafe guest's indirect branch speculation via
prctl(). A guest can also protect itself by turning on microcode
based mitigations (such as IBPB or STIBP on x86) within the guest.
.. _spectre_sys_info:
Spectre system information
--------------------------
The Linux kernel provides a sysfs interface to enumerate the current
mitigation status of the system for Spectre: whether the system is
vulnerable, and which mitigations are active.
The sysfs file showing Spectre variant 1 mitigation status is:
/sys/devices/system/cpu/vulnerabilities/spectre_v1
The possible values in this file are:
.. list-table::
* - 'Not affected'
- The processor is not vulnerable.
* - 'Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers'
- The swapgs protections are disabled; otherwise it has
protection in the kernel on a case by case basis with explicit
pointer sanitization and usercopy LFENCE barriers.
* - 'Mitigation: usercopy/swapgs barriers and __user pointer sanitization'
- Protection in the kernel on a case by case basis with explicit
pointer sanitization, usercopy LFENCE barriers, and swapgs LFENCE
barriers.
However, the protections are put in place on a case by case basis,
and there is no guarantee that all possible attack vectors for Spectre
variant 1 are covered.
The spectre_v2 kernel file reports if the kernel has been compiled with
retpoline mitigation or if the CPU has hardware mitigation, and if the
CPU has support for additional process-specific mitigation.
This file also reports CPU features enabled by microcode to mitigate
attack between user processes:
1. Indirect Branch Prediction Barrier (IBPB) to add additional
isolation between processes of different users.
2. Single Thread Indirect Branch Predictors (STIBP) to add additional
isolation between CPU threads running on the same core.
These CPU features may impact performance when used and can be enabled
per process on a case-by-case basis.
The sysfs file showing Spectre variant 2 mitigation status is:
/sys/devices/system/cpu/vulnerabilities/spectre_v2
The possible values in this file are:
- Kernel status:
==================================== =================================
'Not affected' The processor is not vulnerable
'Vulnerable' Vulnerable, no mitigation
'Mitigation: Full generic retpoline' Software-focused mitigation
'Mitigation: Full AMD retpoline' AMD-specific software mitigation
'Mitigation: Enhanced IBRS' Hardware-focused mitigation
==================================== =================================
- Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
========== =============================================================
'IBRS_FW' Protection against user program attacks when calling firmware
========== =============================================================
- Indirect branch prediction barrier (IBPB) status for protection between
processes of different users. This feature can be controlled through
prctl() per process, or through kernel command line options. This is
an x86 only feature. For more details see below.
=================== ========================================================
'IBPB: disabled' IBPB unused
'IBPB: always-on' Use IBPB on all tasks
'IBPB: conditional' Use IBPB on SECCOMP or indirect branch restricted tasks
=================== ========================================================
- Single threaded indirect branch prediction (STIBP) status for protection
between different hyper threads. This feature can be controlled through
prctl() per process, or through kernel command line options. This is an
x86 only feature. For more details see below.
==================== ========================================================
'STIBP: disabled' STIBP unused
'STIBP: forced' Use STIBP on all tasks
'STIBP: conditional' Use STIBP on SECCOMP or indirect branch restricted tasks
==================== ========================================================
- Return stack buffer (RSB) protection status:
============= ===========================================
'RSB filling' Protection of RSB on context switch enabled
============= ===========================================
Full mitigation might require a microcode update from the CPU
vendor. When the necessary microcode is not available, the kernel will
report the system as vulnerable.
Turning on mitigation for Spectre variant 1 and Spectre variant 2
-----------------------------------------------------------------
1. Kernel mitigation
^^^^^^^^^^^^^^^^^^^^
Spectre variant 1
~~~~~~~~~~~~~~~~~
For the Spectre variant 1, vulnerable kernel code (as determined
by code audit or scanning tools) is annotated on a case by case
basis to use nospec accessor macros for bounds clipping :ref:`[2]
<spec_ref2>` to avoid any usable disclosure gadgets. However, it may
not cover all attack vectors for Spectre variant 1.
Copy-from-user code has an LFENCE barrier to prevent the access_ok()
check from being mis-speculated. The barrier is done by the
barrier_nospec() macro.
For the swapgs variant of Spectre variant 1, LFENCE barriers are
added to interrupt, exception and NMI entry where needed. These
barriers are done by the FENCE_SWAPGS_KERNEL_ENTRY and
FENCE_SWAPGS_USER_ENTRY macros.
Spectre variant 2
~~~~~~~~~~~~~~~~~
For Spectre variant 2 mitigation, the compiler turns indirect calls or
jumps in the kernel into equivalent return trampolines (retpolines)
:ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
addresses. Speculative execution paths under retpolines are trapped
in an infinite loop to prevent any speculative execution jumping to
a gadget.
To turn on retpoline mitigation on a vulnerable CPU, the kernel
needs to be compiled with a gcc compiler that supports the
-mindirect-branch=thunk-extern -mindirect-branch-register options.
If the kernel is compiled with a Clang compiler, the compiler needs
to support -mretpoline-external-thunk option. The kernel config
CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
the latest updated microcode.
On Intel Skylake-era systems the mitigation covers most, but not all,
cases. See :ref:`[3] <spec_ref3>` for more details.
On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
IBRS on x86), retpoline is automatically disabled at run time.
The retpoline mitigation is turned on by default on vulnerable
CPUs. It can be forced on or off by the administrator
via the kernel command line and sysfs control files. See
:ref:`spectre_mitigation_control_command_line`.
On x86, indirect branch restricted speculation is turned on by default
before invoking any firmware code to prevent Spectre variant 2 exploits
using the firmware.
Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y
and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
attacks on the kernel generally more difficult.
2. User program mitigation
^^^^^^^^^^^^^^^^^^^^^^^^^^
User programs can mitigate Spectre variant 1 using LFENCE or "bounds
clipping". For more details see :ref:`[2] <spec_ref2>`.
For Spectre variant 2 mitigation, individual user programs
can be compiled with return trampolines for indirect branches.
This protects them from consuming poisoned entries in the branch
target buffer left by malicious software. Alternatively, the
programs can disable their indirect branch speculation via prctl()
(See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
On x86, this will turn on STIBP to guard against attacks from the
sibling thread when the user program is running, and use IBPB to
flush the branch target buffer when switching to/from the program.
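A minimal sketch of that prctl() usage, assuming a kernel and headers that
provide the speculation-control interface documented in
Documentation/userspace-api/spec_ctrl.rst::

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
        long state;

        /* Disable indirect branch speculation for this task. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                  PR_SPEC_DISABLE, 0, 0)) {
            perror("PR_SET_SPECULATION_CTRL");
            return 1;
        }

        /* Read the state back; the result is a mask of PR_SPEC_* flags. */
        state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                      0, 0, 0);
        printf("indirect branch speculation control: 0x%lx\n", state);
        return 0;
    }

On a kernel without this interface the prctl() call simply fails (typically
with EINVAL) and the program reports the error.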
Restricting indirect branch speculation on a user program will
also prevent the program from launching a variant 2 attack
on x86. All sand-boxed SECCOMP programs have indirect branch
speculation restricted by default. Administrators can change
that behavior via the kernel command line and sysfs control files.
See :ref:`spectre_mitigation_control_command_line`.
Programs that disable their indirect branch speculation will have
more overhead and run slower.
User programs should use address space randomization
(/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
difficult.
3. VM mitigation
^^^^^^^^^^^^^^^^
Within the kernel, Spectre variant 1 attacks from rogue guests are
mitigated on a case by case basis in VM exit paths. Vulnerable code
uses nospec accessor macros for "bounds clipping", to avoid any
usable disclosure gadgets. However, this may not cover all variant
1 attack vectors.
For Spectre variant 2 attacks from rogue guests to the kernel, the
Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of
poisoned entries in branch target buffer left by rogue guests. It also
flushes the return stack buffer on every VM exit to prevent a return
stack buffer underflow, which would let the poisoned branch target buffer
be used, and to clear any poisoned entries a rogue guest has left in the
return stack buffer.
To mitigate guest-to-guest attacks in the same CPU hardware thread,
the branch target buffer is sanitized by flushing before switching
to a new guest on a CPU.
The above mitigations are turned on by default on vulnerable CPUs.
To mitigate guest-to-guest attacks from sibling thread when SMT is
in use, an untrusted guest running in the sibling thread can have
its indirect branch speculation disabled by administrator via prctl().
The kernel also allows guests to use any microcode based mitigation
they choose to use (such as IBPB or STIBP on x86) to protect themselves.
.. _spectre_mitigation_control_command_line:
Mitigation control on the kernel command line
---------------------------------------------
Spectre variant 2 mitigation can be disabled or force enabled at the
kernel command line.
nospectre_v1
[X86,PPC] Disable mitigations for Spectre Variant 1
(bounds check bypass). With this option data leaks are
possible in the system.
nospectre_v2
[X86] Disable all mitigations for the Spectre variant 2
(indirect branch prediction) vulnerability. System may
allow data leaks with this option, which is equivalent
to spectre_v2=off.
spectre_v2=
[X86] Control mitigation of Spectre variant 2
(indirect branch speculation) vulnerability.
The default operation protects the kernel from
user space attacks.
on
unconditionally enable, implies
spectre_v2_user=on
off
unconditionally disable, implies
spectre_v2_user=off
auto
kernel detects whether your CPU model is
vulnerable
Selecting 'on' will, and 'auto' may, choose a
mitigation method at run time according to the
CPU, the available microcode, the setting of the
CONFIG_RETPOLINE configuration option, and the
compiler with which the kernel was built.
Selecting 'on' will also enable the mitigation
against user space to user space task attacks.
Selecting 'off' will disable both the kernel and
the user space protections.
Specific mitigations can also be selected manually:
retpoline
replace indirect branches
retpoline,generic
google's original retpoline
retpoline,amd
AMD-specific minimal thunk
Not specifying this option is equivalent to
spectre_v2=auto.
For user space mitigation:
spectre_v2_user=
[X86] Control mitigation of Spectre variant 2
(indirect branch speculation) vulnerability between
user space tasks
on
Unconditionally enable mitigations. Is
enforced by spectre_v2=on
off
Unconditionally disable mitigations. Is
enforced by spectre_v2=off
prctl
Indirect branch speculation is enabled,
but mitigation can be enabled via prctl
per thread. The mitigation control state
is inherited on fork.
prctl,ibpb
Like "prctl" above, but only STIBP is
controlled per thread. IBPB is issued
always when switching between different user
space processes.
seccomp
Same as "prctl" above, but all seccomp
threads will enable the mitigation unless
they explicitly opt out.
seccomp,ibpb
Like "seccomp" above, but only STIBP is
controlled per thread. IBPB is issued
always when switching between different
user space processes.
auto
Kernel selects the mitigation depending on
the available CPU features and vulnerability.
Default mitigation:
If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
Not specifying this option is equivalent to
spectre_v2_user=auto.
In general the kernel by default selects
reasonable mitigations for the current CPU. To
disable Spectre variant 2 mitigations, boot with
spectre_v2=off. Spectre variant 1 mitigations
cannot be disabled.
Mitigation selection guide
--------------------------
1. Trusted userspace
^^^^^^^^^^^^^^^^^^^^
If all userspace applications are from trusted sources and do not
execute externally supplied untrusted code, then the mitigations can
be disabled.
2. Protect sensitive programs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For security-sensitive programs that have secrets (e.g. crypto
keys), protection against Spectre variant 2 can be put in place by
disabling indirect branch speculation when the program is running
(See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
3. Sandbox untrusted programs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Untrusted programs that could be a source of attacks can be cordoned
off by disabling their indirect branch speculation when they are run
(See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
This prevents untrusted programs from polluting the branch target
buffer. All programs running in SECCOMP sandboxes have indirect
branch speculation restricted by default. This behavior can be
changed via the kernel command line and sysfs control files. See
:ref:`spectre_mitigation_control_command_line`.
4. High security mode
^^^^^^^^^^^^^^^^^^^^^
All Spectre variant 2 mitigations can be forced on
at boot time for all programs (See the "on" option in
:ref:`spectre_mitigation_control_command_line`). This will add
overhead as indirect branch speculations for all programs will be
restricted.
On x86, branch target buffer will be flushed with IBPB when switching
to a new program. STIBP is left on all the time to protect programs
against variant 2 attacks originating from programs running on
sibling threads.
Alternatively, STIBP can be used only when running programs
whose indirect branch speculation is explicitly disabled,
while IBPB is still used all the time when switching to a new
program to clear the branch target buffer (See "ibpb" option in
:ref:`spectre_mitigation_control_command_line`). This "ibpb" option
has less performance cost than the "on" option, which leaves STIBP
on all the time.
References on Spectre
---------------------
Intel white papers:
.. _spec_ref1:
[1] `Intel analysis of speculative execution side channels <https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf>`_.
.. _spec_ref2:
[2] `Bounds check bypass <https://software.intel.com/security-software-guidance/software-guidance/bounds-check-bypass>`_.
.. _spec_ref3:
[3] `Deep dive: Retpoline: A branch target injection mitigation <https://software.intel.com/security-software-guidance/insights/deep-dive-retpoline-branch-target-injection-mitigation>`_.
.. _spec_ref4:
[4] `Deep Dive: Single Thread Indirect Branch Predictors <https://software.intel.com/security-software-guidance/insights/deep-dive-single-thread-indirect-branch-predictors>`_.
AMD white papers:
.. _spec_ref5:
[5] `AMD64 technology indirect branch control extension <https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf>`_.
.. _spec_ref6:
[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
ARM white papers:
.. _spec_ref7:
[7] `Cache speculation side-channels <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/download-the-whitepaper>`_.
.. _spec_ref8:
[8] `Cache speculation issues update <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/latest-updates/cache-speculation-issues-update>`_.
Google white paper:
.. _spec_ref9:
[9] `Retpoline: a software construct for preventing branch-target-injection <https://support.google.com/faqs/answer/7625886>`_.
MIPS white paper:
.. _spec_ref10:
[10] `MIPS: response on speculative execution and side channel vulnerabilities <https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
Academic papers:
.. _spec_ref11:
[11] `Spectre Attacks: Exploiting Speculative Execution <https://spectreattack.com/spectre.pdf>`_.
.. _spec_ref12:
[12] `NetSpectre: Read Arbitrary Memory over Network <https://arxiv.org/abs/1807.10535>`_.
.. _spec_ref13:
[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer <https://www.usenix.org/system/files/conference/woot18/woot18-paper-koruyeh.pdf>`_.


@@ -0,0 +1,279 @@
.. SPDX-License-Identifier: GPL-2.0
TAA - TSX Asynchronous Abort
======================================
TAA is a hardware vulnerability that allows unprivileged speculative access to
data which is available in various CPU internal buffers by using asynchronous
aborts within an Intel TSX transactional region.
Affected processors
-------------------
This vulnerability only affects Intel processors that support Intel
Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
is 0 in the IA32_ARCH_CAPABILITIES MSR. On processors where the MDS_NO bit
(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
also mitigate against TAA.
Whether a processor is affected or not can be read out from the TAA
vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.
Related CVEs
------------
The following CVE entry is related to this TAA issue:
============== ===== ===================================================
CVE-2019-11135 TAA TSX Asynchronous Abort (TAA) condition on some
microprocessors utilizing speculative execution may
allow an authenticated user to potentially enable
information disclosure via a side channel with
local access.
============== ===== ===================================================
Problem
-------
When performing store, load or L1 refill operations, processors write
data into temporary microarchitectural structures (buffers). The data in
those buffers can be forwarded to load operations as an optimization.
Intel TSX is an extension to the x86 instruction set architecture that adds
hardware transactional memory support to improve performance of multi-threaded
software. TSX lets the processor expose and exploit concurrency hidden in an
application due to dynamically avoiding unnecessary synchronization.
TSX supports atomic memory transactions that are either committed (success) or
aborted. During an abort, operations that happened within the transactional region
are rolled back. An asynchronous abort takes place, among other options, when a
different thread accesses a cache line that is also used within the transactional
region when that access might lead to a data race.
Immediately after an uncompleted asynchronous abort, certain speculatively
executed loads may read data from those internal buffers and pass it to dependent
operations. This can be then used to infer the value via a cache side channel
attack.
Because the buffers are potentially shared between Hyper-Threads, cross
Hyper-Thread attacks are possible.
The victim of a malicious actor does not need to make use of TSX. Only the
attacker needs to begin a TSX transaction and raise an asynchronous abort
which in turn potentially leaks data stored in the buffers.
More detailed technical information is available in the TAA specific x86
architecture section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
Attack scenarios
----------------
Attacks against the TAA vulnerability can be implemented from unprivileged
applications running on hosts or guests.
As for MDS, the attacker has no control over the memory addresses that can
be leaked. Only the victim is responsible for bringing data to the CPU. As
a result, the malicious actor has to sample as much data as possible and
then postprocess it to try to infer any useful information from it.
A potential attacker only has read access to the data. Also, there is no direct
privilege escalation by using this technique.
.. _tsx_async_abort_sys_info:
TAA system information
-----------------------
The Linux kernel provides a sysfs interface to enumerate the current TAA status
of mitigated systems. The relevant sysfs file is:
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
The possible values in this file are:
.. list-table::
* - 'Vulnerable'
- The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
* - 'Vulnerable: Clear CPU buffers attempted, no microcode'
- The system tries to clear the buffers but the microcode might not support the operation.
* - 'Mitigation: Clear CPU buffers'
- The microcode has been updated to clear the buffers. TSX is still enabled.
* - 'Mitigation: TSX disabled'
- TSX is disabled.
* - 'Not affected'
- The CPU is not affected by this issue.
.. _ucode_needed:
Best effort mitigation mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the processor is vulnerable, but the availability of the microcode-based
mitigation mechanism is not advertised via CPUID the kernel selects a best
effort mitigation mode. This mode invokes the mitigation instructions
without a guarantee that they clear the CPU buffers.
This is done to address virtualization scenarios where the host has the
microcode update applied, but the hypervisor is not yet updated to expose the
CPUID to the guest. If the host has updated microcode the protection takes
effect; otherwise a few CPU cycles are wasted pointlessly.
The state in the tsx_async_abort sysfs file reflects this situation
accordingly.
Mitigation mechanism
--------------------
The kernel detects the affected CPUs and the presence of the microcode which is
required. If a CPU is affected and the microcode is available, then the kernel
enables the mitigation by default.
The mitigation can be controlled at boot time via a kernel command line option.
See :ref:`taa_mitigation_control_command_line`.
.. _virt_mechanism:
Virtualization mitigation
^^^^^^^^^^^^^^^^^^^^^^^^^
Affected systems where the host has TAA microcode and TAA is mitigated by
having disabled TSX previously, are not vulnerable regardless of the status
of the VMs.
In all other cases, if the host either does not have the TAA microcode or
the kernel is not mitigated, the system might be vulnerable.
.. _taa_mitigation_control_command_line:
Mitigation control on the kernel command line
---------------------------------------------
The kernel command line allows controlling the TAA mitigations at boot time
with the option "tsx_async_abort=". The valid arguments for this option are:
============ =============================================================
off This option disables the TAA mitigation on affected platforms.
If the system has TSX enabled (see next parameter) and the CPU
is affected, the system is vulnerable.
full TAA mitigation is enabled. If TSX is enabled, on an affected
system it will clear CPU buffers on ring transitions. On
systems which are MDS-affected and deploy MDS mitigation,
TAA is also mitigated. Specifying this option on those
systems will have no effect.
full,nosmt The same as tsx_async_abort=full, with SMT disabled on
vulnerable CPUs that have TSX enabled. This is the complete
mitigation. When TSX is disabled, SMT is not disabled because the
CPU is not vulnerable to cross-thread TAA attacks.
============ =============================================================
Not specifying this option is equivalent to "tsx_async_abort=full". For
processors that are affected by both TAA and MDS, specifying just
"tsx_async_abort=off" without an accompanying "mds=off" will have no
effect as the same mitigation is used for both vulnerabilities.
The kernel command line also allows controlling the TSX feature using the
parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
to control the TSX feature and the enumeration of the TSX feature bits (RTM
and HLE) in CPUID.
The valid options are:
============ =============================================================
off Disables TSX on the system.
Note that this option takes effect only on newer CPUs which are
not vulnerable to MDS, i.e., have MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1
and which get the new IA32_TSX_CTRL MSR through a microcode
update. This new MSR allows for the reliable deactivation of
the TSX functionality.
on Enables TSX.
Although there are mitigations for all known security
vulnerabilities, TSX has been known to be an accelerator for
several previous speculation-related CVEs, and so there may be
unknown security risks associated with leaving it enabled.
auto Disables TSX if X86_BUG_TAA is present, otherwise enables TSX
on the system.
============ =============================================================
Not specifying this option is equivalent to "tsx=off".
The following combinations of "tsx_async_abort" and "tsx" are possible. For
affected platforms tsx=auto is equivalent to tsx=off, and the result will be:
========= ========================== =========================================
tsx=on tsx_async_abort=full The system will use VERW to clear CPU
buffers. Cross-thread attacks are still
possible on SMT machines.
tsx=on tsx_async_abort=full,nosmt As above, cross-thread attacks on SMT
mitigated.
tsx=on tsx_async_abort=off The system is vulnerable.
tsx=off tsx_async_abort=full TSX might be disabled if microcode
provides a TSX control MSR. If so,
the system is not vulnerable.
tsx=off tsx_async_abort=full,nosmt Ditto
tsx=off tsx_async_abort=off Ditto
========= ========================== =========================================
For unaffected platforms "tsx=on" and "tsx_async_abort=full" do not clear CPU
buffers. For platforms without TSX control (MSR_IA32_ARCH_CAPABILITIES.MDS_NO=0)
the "tsx" command line argument has no effect.
For affected platforms, the table below indicates the mitigation status for the
combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO
and TSX_CTRL_MSR.
======= ========= ============= ========================================
MDS_NO MD_CLEAR TSX_CTRL_MSR Status
======= ========= ============= ========================================
0 0 0 Vulnerable (needs microcode)
0 1 0 MDS and TAA mitigated via VERW
1 1 0 MDS fixed, TAA vulnerable if TSX enabled
because MD_CLEAR has no meaning and
VERW is not guaranteed to clear buffers
1 X 1 MDS fixed, TAA can be mitigated by
VERW or TSX_CTRL_MSR
======= ========= ============= ========================================
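As an illustration, the ARCH_CAPABILITIES bits can be inspected from userspace
through the msr driver (/dev/cpu/*/msr, needs root and CONFIG_X86_MSR). This is
a hedged sketch, assuming MSR index 0x10a with MDS_NO at bit 5, TSX_CTRL at
bit 7 and TAA_NO at bit 8; MD_CLEAR is a CPUID bit rather than an MSR bit and
is not read here::

  #include <stdio.h>
  #include <stdint.h>
  #include <fcntl.h>
  #include <unistd.h>

  #define MSR_IA32_ARCH_CAPABILITIES 0x10a   /* assumed MSR index */

  int main(void)
  {
      uint64_t caps;
      int fd = open("/dev/cpu/0/msr", O_RDONLY);   /* needs root + msr module */

      if (fd < 0 || pread(fd, &caps, sizeof(caps),
                          MSR_IA32_ARCH_CAPABILITIES) != sizeof(caps)) {
          perror("rdmsr");
          return 1;
      }
      printf("MDS_NO:   %llu\n", (unsigned long long)((caps >> 5) & 1));
      printf("TSX_CTRL: %llu\n", (unsigned long long)((caps >> 7) & 1));
      printf("TAA_NO:   %llu\n", (unsigned long long)((caps >> 8) & 1));
      close(fd);
      return 0;
  }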
Mitigation selection guide
--------------------------
1. Trusted userspace and guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If all user space applications are from a trusted source and do not execute
untrusted code which is supplied externally, then the mitigation can be
disabled. The same applies to virtualized environments with trusted guests.
2. Untrusted userspace and guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If there are untrusted applications or guests on the system, enabling TSX
might allow a malicious actor to leak data from the host or from other
processes running on the same physical core.
If the microcode is available and TSX is disabled on the host, attacks
are prevented in a virtualized environment as well, even if the VMs do not
explicitly enable the mitigation.
.. _taa_default_mitigations:
Default mitigations
-------------------
The kernel's default action for vulnerable processors is:
- Deploy TSX disable mitigation (tsx_async_abort=full tsx=off).


@ -17,14 +17,12 @@ etc.
kernel-parameters
devices
This section describes CPU vulnerabilities and provides an overview of the
possible mitigations along with guidance for selecting mitigations if they
are configurable at compile, boot or run time.
This section describes CPU vulnerabilities and their mitigations.
.. toctree::
:maxdepth: 1
l1tf
hw-vuln/index
Here is a set of documents aimed at users who are trying to track down
problems and bugs in particular.


@ -1852,6 +1852,25 @@
KVM MMU at runtime.
Default is 0 (off)
kvm.nx_huge_pages=
[KVM] Controls the software workaround for the
X86_BUG_ITLB_MULTIHIT bug.
force : Always deploy workaround.
off : Never deploy workaround.
auto : Deploy workaround based on the presence of
X86_BUG_ITLB_MULTIHIT.
Default is 'auto'.
If the software workaround is enabled for the host,
guests need not enable it for nested guests.
kvm.nx_huge_pages_recovery_ratio=
[KVM] Controls how many 4KiB pages are periodically zapped
back to huge pages. 0 disables the recovery; otherwise, if
the value is N, KVM will zap 1/Nth of the 4KiB pages every
minute. The default is 60.
kvm-amd.nested= [KVM,AMD] Allow nested virtualization in KVM/SVM.
Default is 1 (enabled)
@ -1971,7 +1990,7 @@
Default is 'flush'.
For details see: Documentation/admin-guide/l1tf.rst
For details see: Documentation/admin-guide/hw-vuln/l1tf.rst
l2cr= [PPC]
@ -2214,6 +2233,38 @@
Format: <first>,<last>
Specifies range of consoles to be captured by the MDA.
mds= [X86,INTEL]
Control mitigation for the Micro-architectural Data
Sampling (MDS) vulnerability.
Certain CPUs are vulnerable to an exploit against CPU
internal buffers which can forward information to a
disclosure gadget under certain conditions.
In vulnerable processors, the speculatively
forwarded data can be used in a cache side channel
attack, to access data to which the attacker does
not have direct access.
This parameter controls the MDS mitigation. The
options are:
full - Enable MDS mitigation on vulnerable CPUs
full,nosmt - Enable MDS mitigation and disable
SMT on vulnerable CPUs
off - Unconditionally disable MDS mitigation
On TAA-affected machines, mds=off can be prevented by
an active TAA mitigation as both vulnerabilities are
mitigated with the same mechanism, so in order to disable
this mitigation you need to specify tsx_async_abort=off
too.
Not specifying this option is equivalent to
mds=full.
For details see: Documentation/admin-guide/hw-vuln/mds.rst
mem=nn[KMG] [KNL,BOOT] Force usage of a specific amount of memory
Amount of memory to be used when the kernel is not able
to see the whole system memory or for test.
@ -2362,6 +2413,51 @@
in the "bleeding edge" mini2440 support kernel at
http://repo.or.cz/w/linux-2.6/mini2440.git
mitigations=
[X86,PPC,S390,ARM64] Control optional mitigations for
CPU vulnerabilities. This is a set of curated,
arch-independent options, each of which is an
aggregation of existing arch-specific options.
off
Disable all optional CPU mitigations. This
improves system performance, but it may also
expose users to several CPU vulnerabilities.
Equivalent to: nopti [X86,PPC]
kpti=0 [ARM64]
nospectre_v1 [PPC]
nobp=0 [S390]
nospectre_v1 [X86]
nospectre_v2 [X86,PPC,S390,ARM64]
spectre_v2_user=off [X86]
spec_store_bypass_disable=off [X86,PPC]
ssbd=force-off [ARM64]
l1tf=off [X86]
mds=off [X86]
tsx_async_abort=off [X86]
kvm.nx_huge_pages=off [X86]
Exceptions:
This does not have any effect on
kvm.nx_huge_pages when
kvm.nx_huge_pages=force.
auto (default)
Mitigate all CPU vulnerabilities, but leave SMT
enabled, even if it's vulnerable. This is for
users who don't want to be surprised by SMT
getting disabled across kernel upgrades, or who
have other ways of avoiding SMT-based attacks.
Equivalent to: (default behavior)
auto,nosmt
Mitigate all CPU vulnerabilities, disabling SMT
if needed. This is for users who always want to
be fully mitigated, even if it means losing SMT.
Equivalent to: l1tf=flush,nosmt [X86]
mds=full,nosmt [X86]
tsx_async_abort=full,nosmt [X86]
mminit_loglevel=
[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
parameter allows control of the logging verbosity for
@ -2680,10 +2776,14 @@
nosmt=force: Force disable SMT, cannot be undone
via the sysfs control file.
nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2
(indirect branch prediction) vulnerability. System may
allow data leaks with this option, which is equivalent
to spectre_v2=off.
nospectre_v1 [X86,PPC] Disable mitigations for Spectre Variant 1
(bounds check bypass). With this option data leaks
are possible in the system.
nospectre_v2 [X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
the Spectre variant 2 (indirect branch prediction)
vulnerability. System may allow data leaks with this
option.
nospec_store_bypass_disable
[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
@ -3723,6 +3823,13 @@
Run specified binary instead of /init from the ramdisk,
used for early userspace startup. See initrd.
rdrand= [X86]
force - Override the decision by the kernel to hide the
advertisement of RDRAND support (this affects
certain AMD processors because of buggy BIOS
support, specifically around the suspend/resume
path).
rdt= [HW,X86,RDT]
Turn on/off individual RDT features. List is:
cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, mba.
@ -4431,6 +4538,76 @@
platforms where RDTSC is slow and this accounting
can add overhead.
tsx= [X86] Control Transactional Synchronization
Extensions (TSX) feature in Intel processors that
support TSX control.
This parameter controls the TSX feature. The options are:
on - Enable TSX on the system. Although there are
mitigations for all known security vulnerabilities,
TSX has been known to be an accelerator for
several previous speculation-related CVEs, and
so there may be unknown security risks associated
with leaving it enabled.
off - Disable TSX on the system. (Note that this
option takes effect only on newer CPUs which are
not vulnerable to MDS, i.e., have
MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get
the new IA32_TSX_CTRL MSR through a microcode
update. This new MSR allows for the reliable
deactivation of the TSX functionality.)
auto - Disable TSX if X86_BUG_TAA is present,
otherwise enable TSX on the system.
Not specifying this option is equivalent to tsx=off.
See Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
for more details.
tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
Abort (TAA) vulnerability.
Similar to Micro-architectural Data Sampling (MDS)
certain CPUs that support Transactional
Synchronization Extensions (TSX) are vulnerable to an
exploit against CPU internal buffers which can forward
information to a disclosure gadget under certain
conditions.
In vulnerable processors, the speculatively forwarded
data can be used in a cache side channel attack, to
access data to which the attacker does not have direct
access.
This parameter controls the TAA mitigation. The
options are:
full - Enable TAA mitigation on vulnerable CPUs
if TSX is enabled.
full,nosmt - Enable TAA mitigation and disable SMT on
vulnerable CPUs. If TSX is disabled, SMT
is not disabled because CPU is not
vulnerable to cross-thread TAA attacks.
off - Unconditionally disable TAA mitigation
On MDS-affected machines, tsx_async_abort=off can be
prevented by an active MDS mitigation as both vulnerabilities
are mitigated with the same mechanism, so in order to disable
this mitigation you need to specify mds=off too.
Not specifying this option is equivalent to
tsx_async_abort=full. On CPUs which are MDS affected
and deploy MDS mitigation, TAA mitigation is not
required and doesn't provide any additional
mitigation.
For details see:
Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
turbografx.map[2|3]= [HW,JOY]
TurboGraFX parallel port interface
Format:
@ -4516,13 +4693,13 @@
Flags is a set of characters, each corresponding
to a common usb-storage quirk flag as follows:
a = SANE_SENSE (collect more than 18 bytes
of sense data);
of sense data, not on uas);
b = BAD_SENSE (don't collect more than 18
bytes of sense data);
bytes of sense data, not on uas);
c = FIX_CAPACITY (decrease the reported
device capacity by one sector);
d = NO_READ_DISC_INFO (don't use
READ_DISC_INFO command);
READ_DISC_INFO command, not on uas);
e = NO_READ_CAPACITY_16 (don't use
READ_CAPACITY_16 command);
f = NO_REPORT_OPCODES (don't use report opcodes
@ -4537,17 +4714,18 @@
j = NO_REPORT_LUNS (don't use report luns
command, uas only);
l = NOT_LOCKABLE (don't try to lock and
unlock ejectable media);
unlock ejectable media, not on uas);
m = MAX_SECTORS_64 (don't transfer more
than 64 sectors = 32 KB at a time);
than 64 sectors = 32 KB at a time,
not on uas);
n = INITIAL_READ10 (force a retry of the
initial READ(10) command);
initial READ(10) command, not on uas);
o = CAPACITY_OK (accept the capacity
reported by the device);
reported by the device, not on uas);
p = WRITE_CACHE (the device cache is ON
by default);
by default, not on uas);
r = IGNORE_RESIDUE (the device reports
bogus residue values);
bogus residue values, not on uas);
s = SINGLE_LUN (the device has only one
Logical Unit);
t = NO_ATA_1X (don't allow ATA(12) and ATA(16)
@ -4556,7 +4734,8 @@
w = NO_WP_DETECT (don't test whether the
medium is write-protected).
y = ALWAYS_SYNC (issue a SYNCHRONIZE_CACHE
even if the device claims no cache)
even if the device claims no cache,
not on uas)
Example: quirks=0419:aaf5:rl,0421:0433:rc
user_debug= [KNL,ARM]
@ -4801,6 +4980,10 @@
the unplug protocol
never -- do not unplug even if version check succeeds
xen_legacy_crash [X86,XEN]
Crash from Xen panic notifier, without executing late
panic() code such as dumping handler.
xen_nopvspin [X86,XEN]
Disables the ticketlock slowpath using Xen PV
optimizations.


@ -6,7 +6,7 @@ TL;DR summary
* Use only NEON instructions, or VFP instructions that don't rely on support
code
* Isolate your NEON code in a separate compilation unit, and compile it with
'-mfpu=neon -mfloat-abi=softfp'
'-march=armv7-a -mfpu=neon -mfloat-abi=softfp'
* Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your
NEON code
* Don't sleep in your NEON code, and be aware that it will be executed with
@ -87,7 +87,7 @@ instructions appearing in unexpected places if no special care is taken.
Therefore, the recommended and only supported way of using NEON/VFP in the
kernel is by adhering to the following rules:
* isolate the NEON code in a separate compilation unit and compile it with
'-mfpu=neon -mfloat-abi=softfp';
'-march=armv7-a -mfpu=neon -mfloat-abi=softfp';
* issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls
into the unit containing the NEON code from a compilation unit which is *not*
built with the GCC flag '-mfpu=neon' set.


@ -110,7 +110,17 @@ infrastructure:
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
| RES0 | [63-32] | n |
| TS | [55-52] | y |
|--------------------------------------------------|
| FHM | [51-48] | y |
|--------------------------------------------------|
| DP | [47-44] | y |
|--------------------------------------------------|
| SM4 | [43-40] | y |
|--------------------------------------------------|
| SM3 | [39-36] | y |
|--------------------------------------------------|
| SHA3 | [35-32] | y |
|--------------------------------------------------|
| RDM | [31-28] | y |
|--------------------------------------------------|
@ -123,8 +133,6 @@ infrastructure:
| SHA1 | [11-8] | y |
|--------------------------------------------------|
| AES | [7-4] | y |
|--------------------------------------------------|
| RES0 | [3-0] | n |
x--------------------------------------------------x
@ -132,7 +140,9 @@ infrastructure:
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
| RES0 | [63-28] | n |
| DIT | [51-48] | y |
|--------------------------------------------------|
| SVE | [35-32] | y |
|--------------------------------------------------|
| GIC | [27-24] | n |
|--------------------------------------------------|
@ -183,6 +193,14 @@ infrastructure:
| DPB | [3-0] | y |
x--------------------------------------------------x
5) ID_AA64MMFR2_EL1 - Memory model feature register 2
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
| AT | [35-32] | y |
x--------------------------------------------------x
Appendix I: Example
---------------------------


@ -177,6 +177,9 @@ These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide full ordered atomics and these barriers are no-ops.
NOTE: when the atomic RmW ops are fully ordered, they should also imply a
compiler barrier.
Thus:
atomic_fetch_add();


@ -37,7 +37,7 @@ needs_sphinx = '1.3'
extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include', 'cdomain', 'kfigure']
# The name of the math extension changed on Sphinx 1.4
if major == 1 and minor > 3:
if (major == 1 and minor > 3) or (major > 1):
extensions.append("sphinx.ext.imgmath")
else:
extensions.append("sphinx.ext.pngmath")


@ -46,7 +46,7 @@ Required properties:
Example (R-Car H3):
usb2_clksel: clock-controller@e6590630 {
compatible = "renesas,r8a77950-rcar-usb2-clock-sel",
compatible = "renesas,r8a7795-rcar-usb2-clock-sel",
"renesas,rcar-gen3-usb2-clock-sel";
reg = <0 0xe6590630 0 0x02>;
clocks = <&cpg CPG_MOD 703>, <&usb_extal>, <&usb_xtal>;


@ -6,7 +6,8 @@ Required properties:
"atmel,24c00", "atmel,24c01", "atmel,24c02", "atmel,24c04",
"atmel,24c08", "atmel,24c16", "atmel,24c32", "atmel,24c64",
"atmel,24c128", "atmel,24c256", "atmel,24c512", "atmel,24c1024"
"atmel,24c128", "atmel,24c256", "atmel,24c512", "atmel,24c1024",
"atmel,24c2048"
"catalyst,24c32"
@ -23,7 +24,7 @@ Required properties:
device with <type> and manufacturer "atmel" should be used.
Possible types are:
"24c00", "24c01", "24c02", "24c04", "24c08", "24c16", "24c32", "24c64",
"24c128", "24c256", "24c512", "24c1024", "spd"
"24c128", "24c256", "24c512", "24c1024", "24c2048", "spd"
- reg : the I2C address of the EEPROM


@ -73,7 +73,7 @@ Example:
};
};
port@10 {
port@a {
reg = <10>;
adv7482_txa: endpoint {
@ -83,7 +83,7 @@ Example:
};
};
port@11 {
port@b {
reg = <11>;
adv7482_txb: endpoint {


@ -19,6 +19,9 @@ Optional properties:
- interrupt-names: must be "mdio_done_error" when there is a share interrupt fed
to this hardware block, or must be "mdio_done" for the first interrupt and
"mdio_error" for the second when there are separate interrupts
- clocks: A reference to the clock supplying the MDIO bus controller
- clock-frequency: the MDIO bus clock that must be output by the MDIO bus
hardware, if absent, the default hardware values are used
Child nodes of this MDIO bus controller node are standard Ethernet PHY device
nodes as described in Documentation/devicetree/bindings/net/phy.txt


@ -4,6 +4,7 @@ Required properties:
- compatible: Should be one of the following:
- "microchip,mcp2510" for MCP2510.
- "microchip,mcp2515" for MCP2515.
- "microchip,mcp25625" for MCP25625.
- reg: SPI chip select.
- clocks: The clock feeding the CAN controller.
- interrupt-parent: The parent interrupt controller.


@ -16,7 +16,7 @@ Required properties:
Optional properties:
- interrupts: interrupt line number for the SMI error/done interrupt
- clocks: phandle for up to three required clocks for the MDIO instance
- clocks: phandle for up to four required clocks for the MDIO instance
The child nodes of the MDIO driver are the individual PHY devices
connected to this MDIO bus. They must have a "reg" property given the


@ -27,4 +27,4 @@ and valid to enable charging:
- "abracon,tc-diode": should be "standard" (0.6V) or "schottky" (0.3V)
- "abracon,tc-resistor": should be <0>, <3>, <6> or <11>. 0 disables the output
resistor, the other values are in ohm.
resistor, the other values are in kOhm.


@ -8,6 +8,6 @@ Required properties:
Example:
serial@12000 {
compatible = "marvell,armada-3700-uart";
reg = <0x12000 0x400>;
reg = <0x12000 0x200>;
interrupts = <43>;
};


@ -47,6 +47,8 @@ Optional properties:
from P0 to P1/P2/P3 without delay.
- snps,dis-tx-ipgap-linecheck-quirk: when set, disable u2mac linestate check
during HS transmit.
- snps,dis_metastability_quirk: when set, disable metastability workaround.
CAUTION: use only if you are absolutely sure of it.
- snps,is-utmi-l1-suspend: true when DWC3 asserts output signal
utmi_l1_suspend_n, false when asserts utmi_sleep_n
- snps,hird-threshold: HIRD threshold


@ -370,11 +370,15 @@ autosuspend the interface's device. When the usage counter is = 0
then the interface is considered to be idle, and the kernel may
autosuspend the device.
Drivers need not be concerned about balancing changes to the usage
counter; the USB core will undo any remaining "get"s when a driver
is unbound from its interface. As a corollary, drivers must not call
any of the ``usb_autopm_*`` functions after their ``disconnect``
routine has returned.
Drivers must be careful to balance their overall changes to the usage
counter. Unbalanced "get"s will remain in effect when a driver is
unbound from its interface, preventing the device from going into
runtime suspend should the interface be bound to a driver again. On
the other hand, drivers are allowed to achieve this balance by calling
the ``usb_autopm_*`` functions even after their ``disconnect`` routine
has returned -- say from within a work-queue routine -- provided they
retain an active reference to the interface (via ``usb_get_intf`` and
``usb_put_intf``).
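As an illustration only (not lifted from any specific driver), a sketch of
such balanced usage around an I/O path in an interface driver, where
my_driver_submit_urbs() is a hypothetical helper::

  #include <linux/usb.h>

  static int my_driver_do_io(struct usb_interface *intf)
  {
      int ret;

      ret = usb_autopm_get_interface(intf);   /* resume device, usage count +1 */
      if (ret)
          return ret;

      ret = my_driver_submit_urbs(intf);      /* hypothetical I/O helper */

      usb_autopm_put_interface(intf);         /* usage count -1, may autosuspend */
      return ret;
  }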
Drivers using the async routines are responsible for their own
synchronization and mutual exclusion.


@ -160,7 +160,7 @@ them but you should handle them according to your needs.
UHID_OUTPUT:
This is sent if the HID device driver wants to send raw data to the I/O
device on the interrupt channel. You should read the payload and forward it to
the device. The payload is of type "struct uhid_data_req".
the device. The payload is of type "struct uhid_output_req".
This may be received even though you haven't received UHID_OPEN, yet.
UHID_GET_REPORT:


@ -86,6 +86,7 @@ implementation.
:maxdepth: 2
sh/index
x86/index
Korean translations
-------------------


@ -410,6 +410,7 @@ tcp_min_rtt_wlen - INTEGER
minimum RTT when it is moved to a longer path (e.g., due to traffic
engineering). A longer window makes the filter more resistant to RTT
inflations such as transient congestion. The unit is seconds.
Possible values: 0 - 86400 (1 day)
Default: 300
tcp_moderate_rcvbuf - BOOLEAN


@ -218,5 +218,4 @@ All other architectures should build just fine too - but they won't have
the new syscalls yet.
Architectures need to implement the new futex_atomic_cmpxchg_inatomic()
inline function before writing up the syscalls (that function returns
-ENOSYS right now).
inline function before writing up the syscalls.


@ -90,6 +90,51 @@ There are two ways in which a group may become throttled:
In case b) above, even though the child may have runtime remaining it will not
be allowed to until the parent's runtime is refreshed.
CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However, all but 1ms of
the slice may be returned to the global pool if all threads on that cpu become
unrunnable. This is configured at compile time by the min_cfs_rq_runtime
variable. This is a performance tweak that helps prevent added contention on
the global lock.
The fact that cpu-local slices do not expire results in some interesting corner
cases that should be understood.
For cgroup cpu-constrained applications this is a
relatively moot point because they will naturally consume the entirety of their
quota as well as the entirety of each cpu-local slice in each period. As a
result it is expected that nr_periods roughly equal nr_throttled, and that
cpuacct.usage will increase roughly equal to cfs_quota_us in each period.
For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. This burst amount will not be transferred between cores.
As a result, this mechanism still strictly limits the task group to quota
average usage, albeit over a longer time window than a single period. This
also limits the burst ability to no more than 1ms per cpu. This provides
a better, more predictable user experience for highly threaded applications with
small quota limits on high core count machines. It also eliminates the
propensity to throttle these applications while simultaneously using less than
their quota amount of cpu. Another way to say this is that by allowing the unused
portion of a slice to remain valid across periods we have decreased the
possibility of wastefully expiring quota on cpu-local silos that don't need a
full slice's amount of cpu time.
The interaction between cpu-bound and non-cpu-bound-interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU it is theoretically possible that the non-cpu bound application
will use up to 1ms additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application is chosen to run, as they will both be runnable and
have remaining quota. This runtime discrepancy will be made up in the following
periods when the interactive application idles.
Examples
--------
1. Limit a group to 1 CPU worth of runtime.


@ -37,7 +37,19 @@ import glob
from docutils import nodes, statemachine
from docutils.statemachine import ViewList
from docutils.parsers.rst import directives, Directive
from sphinx.ext.autodoc import AutodocReporter
#
# AutodocReporter is only good up to Sphinx 1.7
#
import sphinx
Use_SSI = sphinx.__version__[:3] >= '1.7'
if Use_SSI:
from sphinx.util.docutils import switch_source_input
else:
from sphinx.ext.autodoc import AutodocReporter
import kernellog
__version__ = '1.0'
@ -86,7 +98,8 @@ class KernelDocDirective(Directive):
cmd += [filename]
try:
env.app.verbose('calling kernel-doc \'%s\'' % (" ".join(cmd)))
kernellog.verbose(env.app,
'calling kernel-doc \'%s\'' % (" ".join(cmd)))
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
@ -96,7 +109,8 @@ class KernelDocDirective(Directive):
if p.returncode != 0:
sys.stderr.write(err)
env.app.warn('kernel-doc \'%s\' failed with return code %d' % (" ".join(cmd), p.returncode))
kernellog.warn(env.app,
'kernel-doc \'%s\' failed with return code %d' % (" ".join(cmd), p.returncode))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
elif env.config.kerneldoc_verbosity > 0:
sys.stderr.write(err)
@ -117,20 +131,28 @@ class KernelDocDirective(Directive):
lineoffset += 1
node = nodes.section()
buf = self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter
self.do_parse(result, node)
return node.children
except Exception as e: # pylint: disable=W0703
kernellog.warn(env.app, 'kernel-doc \'%s\' processing failed with: %s' %
(" ".join(cmd), str(e)))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
def do_parse(self, result, node):
if Use_SSI:
with switch_source_input(self.state, result):
self.state.nested_parse(result, 0, node, match_titles=1)
else:
save = self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter
self.state.memo.reporter = AutodocReporter(result, self.state.memo.reporter)
self.state.memo.title_styles, self.state.memo.section_level = [], 0
try:
self.state.nested_parse(result, 0, node, match_titles=1)
finally:
self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter = buf
self.state.memo.title_styles, self.state.memo.section_level, self.state.memo.reporter = save
return node.children
except Exception as e: # pylint: disable=W0703
env.app.warn('kernel-doc \'%s\' processing failed with: %s' %
(" ".join(cmd), str(e)))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
def setup(app):
app.add_config_value('kerneldoc_bin', None, 'env')


@ -0,0 +1,28 @@
# SPDX-License-Identifier: GPL-2.0
#
# Sphinx has deprecated its older logging interface, but the replacement
# only goes back to 1.6. So here's a wrapper layer to keep around for
# as long as we support 1.4.
#
import sphinx
if sphinx.__version__[:3] >= '1.6':
UseLogging = True
from sphinx.util import logging
logger = logging.getLogger('kerneldoc')
else:
UseLogging = False
def warn(app, message):
if UseLogging:
logger.warning(message)
else:
app.warn(message)
def verbose(app, message):
if UseLogging:
logger.verbose(message)
else:
app.verbose(message)


@ -60,6 +60,8 @@ import sphinx
from sphinx.util.nodes import clean_astext
from six import iteritems
import kernellog
PY3 = sys.version_info[0] == 3
if PY3:
@ -171,20 +173,20 @@ def setupTools(app):
This function is called once, when the builder is initiated.
"""
global dot_cmd, convert_cmd # pylint: disable=W0603
app.verbose("kfigure: check installed tools ...")
kernellog.verbose(app, "kfigure: check installed tools ...")
dot_cmd = which('dot')
convert_cmd = which('convert')
if dot_cmd:
app.verbose("use dot(1) from: " + dot_cmd)
kernellog.verbose(app, "use dot(1) from: " + dot_cmd)
else:
app.warn("dot(1) not found, for better output quality install "
"graphviz from http://www.graphviz.org")
kernellog.warn(app, "dot(1) not found, for better output quality install "
"graphviz from http://www.graphviz.org")
if convert_cmd:
app.verbose("use convert(1) from: " + convert_cmd)
kernellog.verbose(app, "use convert(1) from: " + convert_cmd)
else:
app.warn(
kernellog.warn(app,
"convert(1) not found, for SVG to PDF conversion install "
"ImageMagick (https://www.imagemagick.org)")
@ -220,12 +222,13 @@ def convert_image(img_node, translator, src_fname=None):
# in kernel builds, use 'make SPHINXOPTS=-v' to see verbose messages
app.verbose('assert best format for: ' + img_node['uri'])
kernellog.verbose(app, 'assert best format for: ' + img_node['uri'])
if in_ext == '.dot':
if not dot_cmd:
app.verbose("dot from graphviz not available / include DOT raw.")
kernellog.verbose(app,
"dot from graphviz not available / include DOT raw.")
img_node.replace_self(file2literal(src_fname))
elif translator.builder.format == 'latex':
@ -252,7 +255,8 @@ def convert_image(img_node, translator, src_fname=None):
if translator.builder.format == 'latex':
if convert_cmd is None:
app.verbose("no SVG to PDF conversion available / include SVG raw.")
kernellog.verbose(app,
"no SVG to PDF conversion available / include SVG raw.")
img_node.replace_self(file2literal(src_fname))
else:
dst_fname = path.join(translator.builder.outdir, fname + '.pdf')
@ -265,18 +269,19 @@ def convert_image(img_node, translator, src_fname=None):
_name = dst_fname[len(translator.builder.outdir) + 1:]
if isNewer(dst_fname, src_fname):
app.verbose("convert: {out}/%s already exists and is newer" % _name)
kernellog.verbose(app,
"convert: {out}/%s already exists and is newer" % _name)
else:
ok = False
mkdir(path.dirname(dst_fname))
if in_ext == '.dot':
app.verbose('convert DOT to: {out}/' + _name)
kernellog.verbose(app, 'convert DOT to: {out}/' + _name)
ok = dot2format(app, src_fname, dst_fname)
elif in_ext == '.svg':
app.verbose('convert SVG to: {out}/' + _name)
kernellog.verbose(app, 'convert SVG to: {out}/' + _name)
ok = svg2pdf(app, src_fname, dst_fname)
if not ok:
@ -305,7 +310,8 @@ def dot2format(app, dot_fname, out_fname):
with open(out_fname, "w") as out:
exit_code = subprocess.call(cmd, stdout = out)
if exit_code != 0:
app.warn("Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
kernellog.warn(app,
"Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
return bool(exit_code == 0)
def svg2pdf(app, svg_fname, pdf_fname):
@ -322,7 +328,7 @@ def svg2pdf(app, svg_fname, pdf_fname):
# use stdout and stderr from parent
exit_code = subprocess.call(cmd)
if exit_code != 0:
app.warn("Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
kernellog.warn(app, "Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
return bool(exit_code == 0)
@ -415,15 +421,15 @@ def visit_kernel_render(self, node):
app = self.builder.app
srclang = node.get('srclang')
app.verbose('visit kernel-render node lang: "%s"' % (srclang))
kernellog.verbose(app, 'visit kernel-render node lang: "%s"' % (srclang))
tmp_ext = RENDER_MARKUP_EXT.get(srclang, None)
if tmp_ext is None:
app.warn('kernel-render: "%s" unknow / include raw.' % (srclang))
kernellog.warn(app, 'kernel-render: "%s" unknown / include raw.' % (srclang))
return
if not dot_cmd and tmp_ext == '.dot':
app.verbose("dot from graphviz not available / include raw.")
kernellog.verbose(app, "dot from graphviz not available / include raw.")
return
literal_block = node[0]


@ -91,6 +91,14 @@ Values :
0 - disable JIT kallsyms export (default value)
1 - enable JIT kallsyms export for privileged users only
bpf_jit_limit
-------------
This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.
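As an illustration, a minimal C sketch that reads the current limit, assuming
the sysctl is exposed as /proc/sys/net/core/bpf_jit_limit::

  #include <stdio.h>

  int main(void)
  {
      /* assumed procfs path for the net.core.bpf_jit_limit sysctl */
      FILE *f = fopen("/proc/sys/net/core/bpf_jit_limit", "r");
      unsigned long limit;

      if (!f) {
          perror("bpf_jit_limit");
          return 1;
      }
      if (fscanf(f, "%lu", &limit) == 1)
          printf("BPF JIT limit: %lu bytes\n", limit);
      fclose(f);
      return 0;
  }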
dev_weight
--------------


@ -1,138 +0,0 @@
Copyright (C) 1999, 2000 Bruce Tenison
Portions Copyright (C) 1999, 2000 David Nelson
Thanks to David Nelson for guidance and the usage of the scanner.txt
and scanner.c files to model our driver and this informative file.
Mar. 2, 2000
CHANGES
- Initial Revision
OVERVIEW
This README will address issues regarding how to configure the kernel
to access a RIO 500 mp3 player.
Before I explain how to use this to access the Rio500 please be warned:
W A R N I N G:
--------------
Please note that this software is still under development. The authors
are in no way responsible for any damage that may occur, no matter how
inconsequential.
It seems that the Rio has a problem when sending .mp3 with low batteries.
I suggest when the batteries are low and you want to transfer stuff that you
replace it with a fresh one. In my case, what happened is I lost two 16kb
blocks (they are no longer usable to store information to it). But I don't
know if that's normal or not; it could simply be a problem with the flash
memory.
In an extreme case, I left my Rio playing overnight and the batteries wore
down to nothing and appear to have corrupted the flash memory. My RIO
needed to be replaced as a result. Diamond tech support is aware of the
problem. Do NOT allow your batteries to wear down to nothing before
changing them. It appears RIO 500 firmware does not handle low battery
power well at all.
On systems with OHCI controllers, the kernel OHCI code appears to have
power on problems with some chipsets. If you are having problems
connecting to your RIO 500, try turning it on first and then plugging it
into the USB cable.
Contact information:
--------------------
The main page for the project is hosted at sourceforge.net in the following
URL: <http://rio500.sourceforge.net>. You can also go to the project's
sourceforge home page at: <http://sourceforge.net/projects/rio500/>.
There is also a mailing list: rio500-users@lists.sourceforge.net
Authors:
-------
Most of the code was written by Cesar Miquel <miquel@df.uba.ar>. Keith
Clayton <kclayton@jps.net> is incharge of the PPC port and making sure
things work there. Bruce Tenison <btenison@dibbs.net> is adding support
for .fon files and also does testing. The program will mostly sure be
re-written and Pete Ikusz along with the rest will re-design it. I would
also like to thank Tri Nguyen <tmn_3022000@hotmail.com> who provided use
with some important information regarding the communication with the Rio.
ADDITIONAL INFORMATION and Userspace tools
http://rio500.sourceforge.net/
REQUIREMENTS
A host with a USB port. Ideally, either a UHCI (Intel) or OHCI
(Compaq and others) hardware port should work.
A Linux development kernel (2.3.x) with USB support enabled or a
backported version to linux-2.2.x. See http://www.linux-usb.org for
more information on accomplishing this.
A Linux kernel with RIO 500 support enabled.
'lspci' which is only needed to determine the type of USB hardware
available in your machine.
CONFIGURATION
Using `lspci -v`, determine the type of USB hardware available.
If you see something like:
USB Controller: ......
Flags: .....
I/O ports at ....
Then you have a UHCI based controller.
If you see something like:
USB Controller: .....
Flags: ....
Memory at .....
Then you have a OHCI based controller.
Using `make menuconfig` or your preferred method for configuring the
kernel, select 'Support for USB', 'OHCI/UHCI' depending on your
hardware (determined from the steps above), 'USB Diamond Rio500 support', and
'Preliminary USB device filesystem'. Compile and install the modules
(you may need to execute `depmod -a` to update the module
dependencies).
Add a device for the USB rio500:
`mknod /dev/usb/rio500 c 180 64`
Set appropriate permissions for /dev/usb/rio500 (don't forget about
group and world permissions). Both read and write permissions are
required for proper operation.
Load the appropriate modules (if compiled as modules):
OHCI:
modprobe usbcore
modprobe usb-ohci
modprobe rio500
UHCI:
modprobe usbcore
modprobe usb-uhci (or uhci)
modprobe rio500
That's it. The Rio500 Utils at: http://rio500.sourceforge.net should
be able to access the rio500.
BUGS
If you encounter any problems feel free to drop me an email.
Bruce Tenison
btenison@dibbs.net


@ -47,6 +47,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is
available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
misfeature will fail.
.. _set_spec_ctrl:
PR_SET_SPECULATION_CTRL
-----------------------


@ -13,7 +13,7 @@ of a virtual machine. The ioctls belong to three classes
- VM ioctls: These query and set attributes that affect an entire virtual
machine, for example memory layout. In addition a VM ioctl is used to
create virtual cpus (vcpus).
create virtual cpus (vcpus) and devices.
Only run VM ioctls from the same process (address space) that was used
to create the VM.
@ -24,6 +24,11 @@ of a virtual machine. The ioctls belong to three classes
Only run vcpu ioctls from the same thread that was used to create the
vcpu.
- device ioctls: These query and set attributes that control the operation
of a single device.
device ioctls must be issued from the same process (address space) that
was used to create the VM.
2. File descriptors
-------------------
@ -32,10 +37,11 @@ The kvm API is centered around file descriptors. An initial
open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
can be used to issue system ioctls. A KVM_CREATE_VM ioctl on this
handle will create a VM file descriptor which can be used to issue VM
ioctls. A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
and return a file descriptor pointing to it. Finally, ioctls on a vcpu
fd can be used to control the vcpu, including the important task of
actually running guest code.
ioctls. A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
create a virtual cpu or device and return a file descriptor pointing to
the new resource. Finally, ioctls on a vcpu or device fd can be used
to control the vcpu or device. For vcpus, this includes the important
task of actually running guest code.
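As a rough sketch of that file descriptor hierarchy (error handling, memory
setup and KVM_RUN are omitted; illustrative only)::

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
      int kvm  = open("/dev/kvm", O_RDWR);         /* system fd */
      int vm   = ioctl(kvm, KVM_CREATE_VM, 0);     /* VM fd     */
      int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);    /* vcpu fd   */

      /* VM ioctls such as KVM_SET_USER_MEMORY_REGION are issued on 'vm',
       * vcpu ioctls such as KVM_RUN on 'vcpu'. */
      return (kvm < 0 || vm < 0 || vcpu < 0);
  }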
In general file descriptors can be migrated among processes by means
of fork() and the SCM_RIGHTS facility of unix domain socket. These


@ -15,8 +15,6 @@ The acquisition orders for mutexes are as follows:
On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.
For spinlocks, kvm_lock is taken outside kvm->mmu_lock.
Everything else is a leaf: no other lock is taken inside the critical
sections.
@ -169,7 +167,7 @@ which time it will be set using the Dirty tracking mechanism described above.
------------
Name: kvm_lock
Type: spinlock_t
Type: mutex
Arch: any
Protects: - vm_list

Documentation/x86/conf.py Normal file

@ -0,0 +1,10 @@
# -*- coding: utf-8; mode: python -*-
project = "X86 architecture specific documentation"
tags.add("subproject")
latex_documents = [
('index', 'x86.tex', project,
'The kernel development community', 'manual'),
]


@ -0,0 +1,9 @@
==========================
x86 architecture specifics
==========================
.. toctree::
:maxdepth: 1
mds
tsx_async_abort

Documentation/x86/mds.rst Normal file

@ -0,0 +1,193 @@
Microarchitectural Data Sampling (MDS) mitigation
=================================================
.. _mds:
Overview
--------
Microarchitectural Data Sampling (MDS) is a family of side channel attacks
on internal buffers in Intel CPUs. The variants are:
- Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
- Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
- Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
- Microarchitectural Data Sampling Uncacheable Memory (MDSUM) (CVE-2019-11091)
MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
dependent load (store-to-load forwarding) as an optimization. The forward
can also happen to a faulting or assisting load operation for a different
memory address, which can be exploited under certain conditions. Store
buffers are partitioned between Hyper-Threads so cross thread forwarding is
not possible. But if a thread enters or exits a sleep state the store
buffer is repartitioned which can expose data from one thread to the other.
MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
L1 miss situations and to hold data which is returned or sent in response
to a memory or I/O operation. Fill buffers can forward data to a load
operation and also write data to the cache. When the fill buffer is
deallocated it can retain the stale data of the preceding operations which
can then be forwarded to a faulting or assisting load operation, which can
be exploited under certain conditions. Fill buffers are shared between
Hyper-Threads so cross thread leakage is possible.
MLPDS leaks Load Port Data. Load ports are used to perform load operations
from memory or I/O. The received data is then forwarded to the register
file or a subsequent operation. In some implementations the Load Port can
contain stale data from a previous operation which can be forwarded to
faulting or assisting loads under certain conditions, which again can be
exploited eventually. Load ports are shared between Hyper-Threads so cross
thread leakage is possible.
MDSUM is a special case of MSBDS, MFBDS and MLPDS. An uncacheable load from
memory that takes a fault or assist can leave data in a microarchitectural
structure that may later be observed using one of the same methods used by
MSBDS, MFBDS or MLPDS.
Exposure assumptions
--------------------
It is assumed that attack code resides in user space or in a guest with one
exception. The rationale behind this assumption is that the code construct
needed for exploiting MDS requires:
- to control the load to trigger a fault or assist
- to have a disclosure gadget which exposes the speculatively accessed
data for consumption through a side channel.
- to control the pointer through which the disclosure gadget exposes the
data
The existence of such a construct in the kernel cannot be excluded with
100% certainty, but the complexity involved makes it extremely unlikely.
There is one exception, which is untrusted BPF. The functionality of
untrusted BPF is limited, but it needs to be thoroughly investigated
whether it can be used to create such a construct.
Mitigation strategy
-------------------
All variants have the same mitigation strategy at least for the single CPU
thread case (SMT off): Force the CPU to clear the affected buffers.
This is achieved by using the otherwise unused and obsolete VERW
instruction in combination with a microcode update. The microcode clears
the affected CPU buffers when the VERW instruction is executed.
For virtualization there are two ways to achieve CPU buffer
clearing. Either the modified VERW instruction or via the L1D Flush
command. The latter is issued when L1TF mitigation is enabled so the extra
VERW can be avoided. If the CPU is not affected by L1TF then VERW needs to
be issued.
If the VERW instruction with the supplied segment selector argument is
executed on a CPU without the microcode update there is no side effect
other than a small number of pointlessly wasted CPU cycles.
This does not protect against cross-Hyper-Thread attacks except for MSBDS,
which is only exploitable cross-Hyper-Thread when one of the Hyper-Threads
enters a C-state.
The kernel provides a function to invoke the buffer clearing:
mds_clear_cpu_buffers()
The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
(idle) transitions.
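The helper boils down to executing VERW with a valid data segment selector as
operand. A userspace-compilable sketch of the same idea follows; the kernel's
actual helper lives in its x86 headers and uses the kernel data segment
selector, so treat this purely as an illustration::

  #include <stdint.h>

  static inline void clear_cpu_buffers(void)
  {
      uint16_t ds;

      /* use the current data segment selector as the VERW operand */
      asm volatile("mov %%ds, %0" : "=r" (ds));
      asm volatile("verw %0" : : "m" (ds) : "cc");
  }

  int main(void)
  {
      clear_cpu_buffers();   /* no buffer-clearing side effect without the microcode update */
      return 0;
  }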
As a special quirk to address virtualization scenarios where the host has
the microcode updated, but the hypervisor does not (yet) expose the
MD_CLEAR CPUID bit to guests, the kernel issues the VERW instruction in the
hope that it might actually clear the buffers. The state is reflected
accordingly.
According to current knowledge additional mitigations inside the kernel
itself are not required because the necessary gadgets to expose the leaked
data cannot be controlled in a way which allows exploitation from malicious
user space or VM guests.
Kernel internal mitigation modes
--------------------------------
======= ============================================================
off Mitigation is disabled. Either the CPU is not affected or
mds=off is supplied on the kernel command line
full Mitigation is enabled. CPU is affected and MD_CLEAR is
advertised in CPUID.
vmwerv Mitigation is enabled. CPU is affected and MD_CLEAR is not
advertised in CPUID. That is mainly for virtualization
scenarios where the host has the updated microcode but the
hypervisor does not expose MD_CLEAR in CPUID. It's a best
effort approach without guarantee.
======= ============================================================
If the CPU is affected and mds=off is not supplied on the kernel command
line then the kernel selects the appropriate mitigation mode depending on
the availability of the MD_CLEAR CPUID bit.
Mitigation points
-----------------
1. Return to user space
^^^^^^^^^^^^^^^^^^^^^^^
When transitioning from kernel to user space the CPU buffers are flushed
on affected CPUs when the mitigation is not disabled on the kernel
command line. The mitigation is enabled through the static key
mds_user_clear.
The mitigation is invoked in prepare_exit_to_usermode() which covers
all but one of the kernel to user space transitions. The exception
is when we return from a Non Maskable Interrupt (NMI), which is
handled directly in do_nmi().
(The reason that NMI is special is that prepare_exit_to_usermode() can
enable IRQs. In NMI context, NMIs are blocked, and we don't want to
enable IRQs with NMIs blocked.)
2. C-State transition
^^^^^^^^^^^^^^^^^^^^^
When a CPU goes idle and enters a C-State the CPU buffers need to be
cleared on affected CPUs when SMT is active. This addresses the
repartitioning of the store buffer when one of the Hyper-Threads enters
a C-State.
When SMT is inactive, i.e. either the CPU does not support it or all
sibling threads are offline, CPU buffer clearing is not required.
The idle clearing is enabled on CPUs which are only affected by MSBDS
and not by any other MDS variant. The other MDS variants cannot be
protected against cross Hyper-Thread attacks because the Fill Buffer and
the Load Ports are shared. So on CPUs affected by other variants, the
idle clearing would be a window dressing exercise and is therefore not
activated.
The invocation is controlled by the static key mds_idle_clear which is
switched depending on the chosen mitigation mode and the SMT state of
the system.
The buffer clear is only invoked before entering the C-State to prevent
stale data from the idling CPU from spilling to the Hyper-Thread
sibling after the store buffer got repartitioned and all entries are
available to the non-idle sibling.
When coming out of idle the store buffer is partitioned again so each
sibling has half of it available. The CPU coming back from idle could then be
speculatively exposed to contents of the sibling. The buffers are
flushed either on exit to user space or on VMENTER so malicious code
in user space or the guest cannot speculatively access them.
The mitigation is hooked into all variants of halt()/mwait(), but does
not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
has been superseded by the intel_idle driver around 2010 and is
preferred on all affected CPUs which are expected to gain the MD_CLEAR
functionality in microcode. Aside from that, the IO-Port mechanism is a
legacy interface which is only used on older systems which are either
not affected or do not receive microcode updates anymore.


@ -0,0 +1,117 @@
.. SPDX-License-Identifier: GPL-2.0
TSX Async Abort (TAA) mitigation
================================
.. _tsx_async_abort:
Overview
--------
TSX Async Abort (TAA) is a side channel attack on internal buffers in some
Intel processors, similar to Microarchitectural Data Sampling (MDS). In this
case certain loads may speculatively pass invalid data to dependent operations
when an asynchronous abort condition is pending in a Transactional
Synchronization Extensions (TSX) transaction. This includes loads with no
fault or assist condition. Such loads may speculatively expose stale data from
the same uarch data structures as in MDS, with the same scope of exposure, i.e.
same-thread and cross-thread. This issue affects all current processors that
support TSX.
Mitigation strategy
-------------------
a) TSX disable - one of the mitigations is to disable TSX. A new MSR,
IA32_TSX_CTRL, is available on current and future processors after a
microcode update and can be used to disable TSX. In addition, it
controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID.
b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this
vulnerability. More details on this approach can be found in
:ref:`Documentation/admin-guide/hw-vuln/mds.rst <mds>`.
Kernel internal mitigation modes
--------------------------------
============= ============================================================
off Mitigation is disabled. Either the CPU is not affected or
tsx_async_abort=off is supplied on the kernel command line.
tsx disabled Mitigation is enabled. TSX feature is disabled by default at
bootup on processors that support TSX control.
verw Mitigation is enabled. CPU is affected and MD_CLEAR is
advertised in CPUID.
ucode needed Mitigation is enabled. CPU is affected and MD_CLEAR is not
advertised in CPUID. That is mainly for virtualization
scenarios where the host has the updated microcode but the
hypervisor does not expose MD_CLEAR in CPUID. It's a best
effort approach without guarantee.
============= ============================================================
If the CPU is affected and the "tsx_async_abort" kernel command line parameter is
not provided then the kernel selects an appropriate mitigation depending on the
status of RTM and MD_CLEAR CPUID bits.
Below tables indicate the impact of tsx=on|off|auto cmdline options on state of
TAA mitigation, VERW behavior and TSX feature for various combinations of
MSR_IA32_ARCH_CAPABILITIES bits.
1. "tsx=off"
========= ========= ============ ============ ============== =================== ======================
MSR_IA32_ARCH_CAPABILITIES bits Result with cmdline tsx=off
---------------------------------- -------------------------------------------------------------------------
TAA_NO MDS_NO TSX_CTRL_MSR TSX state VERW can clear TAA mitigation TAA mitigation
after bootup CPU buffers tsx_async_abort=off tsx_async_abort=full
========= ========= ============ ============ ============== =================== ======================
0 0 0 HW default Yes Same as MDS Same as MDS
0 0 1 Invalid case Invalid case Invalid case Invalid case
0 1 0 HW default No Need ucode update Need ucode update
0 1 1 Disabled Yes TSX disabled TSX disabled
1 X 1 Disabled X None needed None needed
========= ========= ============ ============ ============== =================== ======================
2. "tsx=on"
========= ========= ============ ============ ============== =================== ======================
MSR_IA32_ARCH_CAPABILITIES bits Result with cmdline tsx=on
---------------------------------- -------------------------------------------------------------------------
TAA_NO MDS_NO TSX_CTRL_MSR TSX state VERW can clear TAA mitigation TAA mitigation
after bootup CPU buffers tsx_async_abort=off tsx_async_abort=full
========= ========= ============ ============ ============== =================== ======================
0 0 0 HW default Yes Same as MDS Same as MDS
0 0 1 Invalid case Invalid case Invalid case Invalid case
0 1 0 HW default No Need ucode update Need ucode update
0 1 1 Enabled Yes None Same as MDS
1 X 1 Enabled X None needed None needed
========= ========= ============ ============ ============== =================== ======================
3. "tsx=auto"
========= ========= ============ ============ ============== =================== ======================
MSR_IA32_ARCH_CAPABILITIES bits Result with cmdline tsx=auto
---------------------------------- -------------------------------------------------------------------------
TAA_NO MDS_NO TSX_CTRL_MSR TSX state VERW can clear TAA mitigation TAA mitigation
after bootup CPU buffers tsx_async_abort=off tsx_async_abort=full
========= ========= ============ ============ ============== =================== ======================
0 0 0 HW default Yes Same as MDS Same as MDS
0 0 1 Invalid case Invalid case Invalid case Invalid case
0 1 0 HW default No Need ucode update Need ucode update
0 1 1 Disabled Yes TSX disabled TSX disabled
1 X 1 Enabled X None needed None needed
========= ========= ============ ============ ============== =================== ======================
In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that
indicates whether MSR_IA32_TSX_CTRL is supported.

There are two control bits in the IA32_TSX_CTRL MSR:

      Bit 0: When set, it disables the Restricted Transactional Memory (RTM)
             sub-feature of TSX (it will force all transactions to abort on the
             XBEGIN instruction).

      Bit 1: When set, it disables the enumeration of the RTM and HLE features
             (i.e. it will make CPUID(EAX=7).EBX{bit4} and
             CPUID(EAX=7).EBX{bit11} read as 0).
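
Expressed as C mask values, the two bits amount to the following minimal sketch;
the macro and function names are illustrative, not necessarily the kernel's own::

    #include <stdint.h>

    /* Illustrative mask names for the two IA32_TSX_CTRL bits described above. */
    #define TSX_CTRL_RTM_DISABLE   (1ULL << 0)  /* Bit 0: force RTM transactions to abort */
    #define TSX_CTRL_CPUID_CLEAR   (1ULL << 1)  /* Bit 1: hide RTM/HLE enumeration in CPUID */

    /* Value a "tsx=off"-style path would write to the MSR. */
    static inline uint64_t tsx_ctrl_disable_value(void)
    {
            return TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
    }
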


@ -13894,13 +13894,6 @@ W: http://www.linux-usb.org/usbnet
S: Maintained
F: drivers/net/usb/dm9601.c
USB DIAMOND RIO500 DRIVER
M: Cesar Miquel <miquel@df.uba.ar>
L: rio500-users@lists.sourceforge.net
W: http://rio500.sourceforge.net
S: Maintained
F: drivers/usb/misc/rio500*
USB EHCI DRIVER
M: Alan Stern <stern@rowland.harvard.edu>
L: linux-usb@vger.kernel.org


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 14
SUBLEVEL = 98
SUBLEVEL = 164
EXTRAVERSION =
NAME = Petit Gorille
@ -427,6 +427,7 @@ KBUILD_AFLAGS_MODULE := -DMODULE
KBUILD_CFLAGS_MODULE := -DMODULE
KBUILD_LDFLAGS_MODULE := -T $(srctree)/scripts/module-common.lds
GCC_PLUGINS_CFLAGS :=
CLANG_FLAGS :=
export ARCH SRCARCH CONFIG_SHELL HOSTCC HOSTCFLAGS CROSS_COMPILE AS LD CC
export CPP AR NM STRIP OBJCOPY OBJDUMP HOSTLDFLAGS HOST_LOADLIBES
@ -479,8 +480,8 @@ endif
ifeq ($(cc-name),clang)
ifneq ($(CROSS_COMPILE),)
CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%))
GCC_TOOLCHAIN_DIR := $(dir $(shell which $(LD)))
CLANG_FLAGS += --target=$(notdir $(CROSS_COMPILE:%-=%))
GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)
GCC_TOOLCHAIN := $(realpath $(GCC_TOOLCHAIN_DIR)/..)
endif
@ -488,6 +489,7 @@ ifneq ($(GCC_TOOLCHAIN),)
CLANG_FLAGS += --gcc-toolchain=$(GCC_TOOLCHAIN)
endif
CLANG_FLAGS += -no-integrated-as
CLANG_FLAGS += -Werror=unknown-warning-option
KBUILD_CFLAGS += $(CLANG_FLAGS)
KBUILD_AFLAGS += $(CLANG_FLAGS)
export CLANG_FLAGS
@ -650,11 +652,11 @@ KBUILD_CFLAGS += $(call cc-disable-warning,frame-address,)
KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation)
KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow)
KBUILD_CFLAGS += $(call cc-disable-warning, int-in-bool-context)
KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
KBUILD_CFLAGS += $(call cc-disable-warning, attribute-alias)
ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
KBUILD_CFLAGS += $(call cc-option,-Oz,-Os)
KBUILD_CFLAGS += $(call cc-disable-warning,maybe-uninitialized,)
KBUILD_CFLAGS += -Os $(call cc-disable-warning,maybe-uninitialized,)
else
ifdef CONFIG_PROFILE_ALL_BRANCHES
KBUILD_CFLAGS += -O2 $(call cc-disable-warning,maybe-uninitialized,)
@ -717,7 +719,6 @@ ifeq ($(cc-name),clang)
KBUILD_CPPFLAGS += $(call cc-option,-Qunused-arguments,)
KBUILD_CFLAGS += $(call cc-disable-warning, format-invalid-specifier)
KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
# Quiet clang warning: comparison of unsigned expression < 0 is always false
KBUILD_CFLAGS += $(call cc-disable-warning, tautological-compare)
# CLANG uses a _MergedGlobals as optimization, but this breaks modpost, as the
@ -839,6 +840,15 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=incompatible-pointer-types)
# Require designated initializers for all marked structures
KBUILD_CFLAGS += $(call cc-option,-Werror=designated-init)
# change __FILE__ to the relative path from the srctree
KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
# ensure -fcf-protection is disabled when using retpoline as it is
# incompatible with -mindirect-branch=thunk-extern
ifdef CONFIG_RETPOLINE
KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
endif
# use the deterministic mode of AR if available
KBUILD_ARFLAGS := $(call ar-option,D)
@ -948,9 +958,11 @@ mod_sign_cmd = true
endif
export mod_sign_cmd
HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
ifdef CONFIG_STACK_VALIDATION
has_libelf := $(call try-run,\
echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
ifeq ($(has_libelf),1)
objtool_target := tools/objtool FORCE
else
@ -1517,9 +1529,6 @@ else # KBUILD_EXTMOD
# We are always building modules
KBUILD_MODULES := 1
PHONY += crmodverdir
crmodverdir:
$(cmd_crmodverdir)
PHONY += $(objtree)/Module.symvers
$(objtree)/Module.symvers:
@ -1531,7 +1540,7 @@ $(objtree)/Module.symvers:
module-dirs := $(addprefix _module_,$(KBUILD_EXTMOD))
PHONY += $(module-dirs) modules
$(module-dirs): crmodverdir $(objtree)/Module.symvers
$(module-dirs): prepare $(objtree)/Module.symvers
$(Q)$(MAKE) $(build)=$(patsubst _module_%,%,$@)
modules: $(module-dirs)
@ -1572,7 +1581,8 @@ help:
# Dummies...
PHONY += prepare scripts
prepare: ;
prepare:
$(cmd_crmodverdir)
scripts: ;
endif # KBUILD_EXTMOD
@ -1697,17 +1707,14 @@ endif
# Modules
/: prepare scripts FORCE
$(cmd_crmodverdir)
$(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \
$(build)=$(build-dir)
# Make sure the latest headers are built for Documentation
Documentation/ samples/: headers_install
%/: prepare scripts FORCE
$(cmd_crmodverdir)
$(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \
$(build)=$(build-dir)
%.ko: prepare scripts FORCE
$(cmd_crmodverdir)
$(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \
$(build)=$(build-dir) $(@:.ko=.o)
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost


@ -56,15 +56,15 @@
#elif defined(CONFIG_ALPHA_DP264) || \
defined(CONFIG_ALPHA_LYNX) || \
defined(CONFIG_ALPHA_SHARK) || \
defined(CONFIG_ALPHA_EIGER)
defined(CONFIG_ALPHA_SHARK)
# define NR_IRQS 64
#elif defined(CONFIG_ALPHA_TITAN)
#define NR_IRQS 80
#elif defined(CONFIG_ALPHA_RAWHIDE) || \
defined(CONFIG_ALPHA_TAKARA)
defined(CONFIG_ALPHA_TAKARA) || \
defined(CONFIG_ALPHA_EIGER)
# define NR_IRQS 128
#elif defined(CONFIG_ALPHA_WILDFIRE)


@ -78,7 +78,7 @@ __load_new_mm_context(struct mm_struct *next_mm)
/* Macro for exception fixup code to access integer registers. */
#define dpf_reg(r) \
(((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-16 : \
(r) <= 18 ? (r)+8 : (r)-10])
(r) <= 18 ? (r)+10 : (r)-10])
asmlinkage void
do_page_fault(unsigned long address, unsigned long mmcsr,


@ -417,6 +417,14 @@ config ARC_HAS_ACCL_REGS
(also referred to as r58:r59). These can also be used by gcc as GPR so
kernel needs to save/restore per process
config ARC_IRQ_NO_AUTOSAVE
bool "Disable hardware autosave regfile on interrupts"
default n
help
On HS cores, a taken interrupt automatically saves the regfile on the stack.
This is programmable and can be optionally disabled, in which case the
software INTERRUPT_PROLOGUE/EPILOGUE do the needed work
endif # ISA_ARCV2
endmenu # "ARC CPU Configuration"


@ -163,12 +163,16 @@
interrupt-names = "macirq";
phy-mode = "rgmii";
snps,pbl = <32>;
snps,multicast-filter-bins = <256>;
clocks = <&gmacclk>;
clock-names = "stmmaceth";
phy-handle = <&phy0>;
resets = <&cgu_rst HSDK_ETH_RESET>;
reset-names = "stmmaceth";
tx-fifo-depth = <4096>;
rx-fifo-depth = <4096>;
mdio {
#address-cells = <1>;
#size-cells = <0>;


@ -9,6 +9,7 @@ CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_BLK_DEV_RAM=y
CONFIG_EMBEDDED=y
CONFIG_PERF_EVENTS=y
# CONFIG_VM_EVENT_COUNTERS is not set


@ -340,7 +340,7 @@ static inline __attribute__ ((const)) int __fls(unsigned long x)
/*
* __ffs: Similar to ffs, but zero based (0-31)
*/
static inline __attribute__ ((const)) int __ffs(unsigned long word)
static inline __attribute__ ((const)) unsigned long __ffs(unsigned long word)
{
if (!word)
return word;
@ -400,9 +400,9 @@ static inline __attribute__ ((const)) int ffs(unsigned long x)
/*
* __ffs: Similar to ffs, but zero based (0-31)
*/
static inline __attribute__ ((const)) int __ffs(unsigned long x)
static inline __attribute__ ((const)) unsigned long __ffs(unsigned long x)
{
int n;
unsigned long n;
asm volatile(
" ffs.f %0, %1 \n" /* 0:31; 31(Z) if src 0 */


@ -52,6 +52,17 @@
#define cache_line_size() SMP_CACHE_BYTES
#define ARCH_DMA_MINALIGN SMP_CACHE_BYTES
/*
* Make sure slab-allocated buffers are 64-bit aligned when atomic64_t uses
* ARCv2 64-bit atomics (LLOCKD/SCONDD). This guarantees runtime 64-bit
* alignment for any atomic64_t embedded in the buffer.
* Default ARCH_SLAB_MINALIGN is __alignof__(long long) which has a relaxed
* value of 4 (and not 8) in ARC ABI.
*/
#if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
#define ARCH_SLAB_MINALIGN 8
#endif
extern void arc_cache_init(void);
extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
extern void read_decode_cache_bcr(void);


@ -92,8 +92,11 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
#endif /* CONFIG_ARC_HAS_LLSC */
#define cmpxchg(ptr, o, n) ((typeof(*(ptr)))__cmpxchg((ptr), \
(unsigned long)(o), (unsigned long)(n)))
#define cmpxchg(ptr, o, n) ({ \
(typeof(*(ptr)))__cmpxchg((ptr), \
(unsigned long)(o), \
(unsigned long)(n)); \
})
/*
* atomic_cmpxchg is same as cmpxchg
@ -198,8 +201,11 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
return __xchg_bad_pointer();
}
#define xchg(ptr, with) ((typeof(*(ptr)))__xchg((unsigned long)(with), (ptr), \
sizeof(*(ptr))))
#define xchg(ptr, with) ({ \
(typeof(*(ptr)))__xchg((unsigned long)(with), \
(ptr), \
sizeof(*(ptr))); \
})
#endif /* CONFIG_ARC_PLAT_EZNPS */


@ -17,6 +17,33 @@
;
; Now manually save: r12, sp, fp, gp, r25
#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
.ifnc \called_from, exception
st.as r9, [sp, -10] ; save r9 in its final stack slot
sub sp, sp, 12 ; skip JLI, LDI, EI
PUSH lp_count
PUSHAX lp_start
PUSHAX lp_end
PUSH blink
PUSH r11
PUSH r10
sub sp, sp, 4 ; skip r9
PUSH r8
PUSH r7
PUSH r6
PUSH r5
PUSH r4
PUSH r3
PUSH r2
PUSH r1
PUSH r0
.endif
#endif
#ifdef CONFIG_ARC_HAS_ACCL_REGS
PUSH r59
PUSH r58
@ -86,6 +113,33 @@
POP r59
#endif
#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
.ifnc \called_from, exception
POP r0
POP r1
POP r2
POP r3
POP r4
POP r5
POP r6
POP r7
POP r8
POP r9
POP r10
POP r11
POP blink
POPAX lp_end
POPAX lp_start
POP r9
mov lp_count, r9
add sp, sp, 12 ; skip JLI, LDI, EI
ld.as r9, [sp, -10] ; reload r9 which got clobbered
.endif
#endif
.endm
/*------------------------------------------------------------------------*/


@ -207,7 +207,7 @@ raw_copy_from_user(void *to, const void __user *from, unsigned long n)
*/
"=&r" (tmp), "+r" (to), "+r" (from)
:
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return n;
}
@ -433,7 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
*/
"=&r" (tmp), "+r" (to), "+r" (from)
:
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return n;
}
@ -653,7 +653,7 @@ static inline unsigned long __arc_clear_user(void __user *to, unsigned long n)
" .previous \n"
: "+r"(d_char), "+r"(res)
: "i"(0)
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return res;
}
@ -686,7 +686,7 @@ __arc_strncpy_from_user(char *dst, const char __user *src, long count)
" .previous \n"
: "+r"(res), "+r"(dst), "+r"(src), "=r"(val)
: "g"(-EFAULT), "r"(count)
: "lp_count", "lp_start", "lp_end", "memory");
: "lp_count", "memory");
return res;
}


@ -209,7 +209,9 @@ restore_regs:
;####### Return from Intr #######
debug_marker_l1:
bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
; bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
btst r0, STATUS_DE_BIT ; Z flag set if bit clear
bnz .Lintr_ret_to_delay_slot ; branch if STATUS_DE_BIT set
.Lisr_ret_fast_path:
; Handle special case #1: (Entry via Exception, Return via IRQ)


@ -17,6 +17,7 @@
#include <asm/entry.h>
#include <asm/arcregs.h>
#include <asm/cache.h>
#include <asm/irqflags.h>
.macro CPU_EARLY_SETUP
@ -47,6 +48,15 @@
sr r5, [ARC_REG_DC_CTRL]
1:
#ifdef CONFIG_ISA_ARCV2
; Unaligned access is disabled at reset, so re-enable early as
; gcc 7.3.1 (ARC GNU 2018.03) onwards generates unaligned access
; by default
lr r5, [status32]
bset r5, r5, STATUS_AD_BIT
kflag r5
#endif
.endm
.section .init.text, "ax",@progbits
@ -93,10 +103,11 @@ ENTRY(stext)
#ifdef CONFIG_ARC_UBOOT_SUPPORT
; Uboot - kernel ABI
; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
; r1 = magic number (board identity, unused as of now
; r1 = magic number (always zero as of now)
; r2 = pointer to uboot provided cmdline or external DTB in mem
; These are handled later in setup_arch()
; These are handled later in handle_uboot_args()
st r0, [@uboot_tag]
st r1, [@uboot_magic]
st r2, [@uboot_arg]
#endif


@ -49,11 +49,13 @@ void arc_init_IRQ(void)
*(unsigned int *)&ictrl = 0;
#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE
ictrl.save_nr_gpr_pairs = 6; /* r0 to r11 (r12 saved manually) */
ictrl.save_blink = 1;
ictrl.save_lp_regs = 1; /* LP_COUNT, LP_START, LP_END */
ictrl.save_u_to_u = 0; /* user ctxt saved on kernel stack */
ictrl.save_idx_regs = 1; /* JLI, LDI, EI */
#endif
WRITE_AUX(AUX_IRQ_CTRL, ictrl);


@ -488,8 +488,8 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
/* loop thru all available h/w condition indexes */
for (j = 0; j < cc_bcr.c; j++) {
write_aux_reg(ARC_REG_CC_INDEX, j);
cc_name.indiv.word0 = read_aux_reg(ARC_REG_CC_NAME0);
cc_name.indiv.word1 = read_aux_reg(ARC_REG_CC_NAME1);
cc_name.indiv.word0 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME0));
cc_name.indiv.word1 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME1));
/* See if it has been mapped to a perf event_id */
for (i = 0; i < ARRAY_SIZE(arc_pmu_ev_hw_map); i++) {


@ -35,6 +35,7 @@ unsigned int intr_to_DE_cnt;
/* Part of U-boot ABI: see head.S */
int __initdata uboot_tag;
int __initdata uboot_magic;
char __initdata *uboot_arg;
const struct machine_desc *machine_desc;
@ -414,43 +415,87 @@ void setup_processor(void)
arc_chk_core_config();
}
static inline int is_kernel(unsigned long addr)
static inline bool uboot_arg_invalid(unsigned long addr)
{
if (addr >= (unsigned long)_stext && addr <= (unsigned long)_end)
return 1;
return 0;
/*
* Check that it is an untranslated address (although MMU is not enabled
* yet, it being a high address ensures this is not by fluke)
*/
if (addr < PAGE_OFFSET)
return true;
/* Check that address doesn't clobber resident kernel image */
return addr >= (unsigned long)_stext && addr <= (unsigned long)_end;
}
#define IGNORE_ARGS "Ignore U-boot args: "
/* uboot_tag values for U-boot - kernel ABI revision 0; see head.S */
#define UBOOT_TAG_NONE 0
#define UBOOT_TAG_CMDLINE 1
#define UBOOT_TAG_DTB 2
/* We always pass 0 as magic from U-boot */
#define UBOOT_MAGIC_VALUE 0
void __init handle_uboot_args(void)
{
bool use_embedded_dtb = true;
bool append_cmdline = false;
#ifdef CONFIG_ARC_UBOOT_SUPPORT
/* check that we know this tag */
if (uboot_tag != UBOOT_TAG_NONE &&
uboot_tag != UBOOT_TAG_CMDLINE &&
uboot_tag != UBOOT_TAG_DTB) {
pr_warn(IGNORE_ARGS "invalid uboot tag: '%08x'\n", uboot_tag);
goto ignore_uboot_args;
}
if (uboot_magic != UBOOT_MAGIC_VALUE) {
pr_warn(IGNORE_ARGS "non zero uboot magic\n");
goto ignore_uboot_args;
}
if (uboot_tag != UBOOT_TAG_NONE &&
uboot_arg_invalid((unsigned long)uboot_arg)) {
pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
goto ignore_uboot_args;
}
/* see if U-boot passed an external Device Tree blob */
if (uboot_tag == UBOOT_TAG_DTB) {
machine_desc = setup_machine_fdt((void *)uboot_arg);
/* external Device Tree blob is invalid - use embedded one */
use_embedded_dtb = !machine_desc;
}
if (uboot_tag == UBOOT_TAG_CMDLINE)
append_cmdline = true;
ignore_uboot_args:
#endif
if (use_embedded_dtb) {
machine_desc = setup_machine_fdt(__dtb_start);
if (!machine_desc)
panic("Embedded DT invalid\n");
}
/*
* NOTE: @boot_command_line is populated by setup_machine_fdt() so this
* append processing can only happen after.
*/
if (append_cmdline) {
/* Ensure a whitespace between the 2 cmdlines */
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, uboot_arg, COMMAND_LINE_SIZE);
}
}
void __init setup_arch(char **cmdline_p)
{
#ifdef CONFIG_ARC_UBOOT_SUPPORT
/* make sure that uboot passed pointer to cmdline/dtb is valid */
if (uboot_tag && is_kernel((unsigned long)uboot_arg))
panic("Invalid uboot arg\n");
/* See if u-boot passed an external Device Tree blob */
machine_desc = setup_machine_fdt(uboot_arg); /* uboot_tag == 2 */
if (!machine_desc)
#endif
{
/* No, so try the embedded one */
machine_desc = setup_machine_fdt(__dtb_start);
if (!machine_desc)
panic("Embedded DT invalid\n");
/*
* If we are here, it is established that @uboot_arg didn't
* point to DT blob. Instead if u-boot says it is cmdline,
* append to embedded DT cmdline.
* setup_machine_fdt() would have populated @boot_command_line
*/
if (uboot_tag == 1) {
/* Ensure a whitespace between the 2 cmdlines */
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, uboot_arg,
COMMAND_LINE_SIZE);
}
}
handle_uboot_args();
/* Save unparsed command line copy for /proc/cmdline */
*cmdline_p = boot_command_line;


@ -155,3 +155,11 @@ void do_insterror_or_kprobe(unsigned long address, struct pt_regs *regs)
insterror_is_error(address, regs);
}
/*
* abort() call generated by older gcc for __builtin_trap()
*/
void abort(void)
{
__asm__ __volatile__("trap_s 5\n");
}


@ -185,11 +185,6 @@ static void *__init unw_hdr_alloc_early(unsigned long sz)
MAX_DMA_ADDRESS);
}
static void *unw_hdr_alloc(unsigned long sz)
{
return kmalloc(sz, GFP_KERNEL);
}
static void init_unwind_table(struct unwind_table *table, const char *name,
const void *core_start, unsigned long core_size,
const void *init_start, unsigned long init_size,
@ -370,6 +365,10 @@ ret_err:
}
#ifdef CONFIG_MODULES
static void *unw_hdr_alloc(unsigned long sz)
{
return kmalloc(sz, GFP_KERNEL);
}
static struct unwind_table *last_table;


@ -25,15 +25,11 @@
#endif
#ifdef CONFIG_ARC_HAS_LL64
# define PREFETCH_READ(RX) prefetch [RX, 56]
# define PREFETCH_WRITE(RX) prefetchw [RX, 64]
# define LOADX(DST,RX) ldd.ab DST, [RX, 8]
# define STOREX(SRC,RX) std.ab SRC, [RX, 8]
# define ZOLSHFT 5
# define ZOLAND 0x1F
#else
# define PREFETCH_READ(RX) prefetch [RX, 28]
# define PREFETCH_WRITE(RX) prefetchw [RX, 32]
# define LOADX(DST,RX) ld.ab DST, [RX, 4]
# define STOREX(SRC,RX) st.ab SRC, [RX, 4]
# define ZOLSHFT 4
@ -41,8 +37,6 @@
#endif
ENTRY_CFI(memcpy)
prefetch [r1] ; Prefetch the read location
prefetchw [r0] ; Prefetch the write location
mov.f 0, r2
;;; if size is zero
jz.d [blink]
@ -72,8 +66,6 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy32_64bytes
;; LOOP START
LOADX (r6, r1)
PREFETCH_READ (r1)
PREFETCH_WRITE (r3)
LOADX (r8, r1)
LOADX (r10, r1)
LOADX (r4, r1)
@ -117,9 +109,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_1
;; LOOP START
ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 24)
or r7, r7, r5
@ -162,9 +152,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_2
;; LOOP START
ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 16)
or r7, r7, r5
@ -204,9 +192,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_3
;; LOOP START
ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 8)
or r7, r7, r5


@ -902,9 +902,11 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
struct pt_regs *regs)
{
struct cpuinfo_arc_mmu *mmu = &cpuinfo_arc700[smp_processor_id()].mmu;
unsigned int pd0[mmu->ways];
unsigned long flags;
int set;
int set, n_ways = mmu->ways;
n_ways = min(n_ways, 4);
BUG_ON(mmu->ways > 4);
local_irq_save(flags);
@ -912,9 +914,10 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
for (set = 0; set < mmu->sets; set++) {
int is_valid, way;
unsigned int pd0[4];
/* read out all the ways of current set */
for (way = 0, is_valid = 0; way < mmu->ways; way++) {
for (way = 0, is_valid = 0; way < n_ways; way++) {
write_aux_reg(ARC_REG_TLBINDEX,
SET_WAY_TO_IDX(mmu, set, way));
write_aux_reg(ARC_REG_TLBCOMMAND, TLBRead);
@ -928,14 +931,14 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
continue;
/* Scan the set for duplicate ways: needs a nested loop */
for (way = 0; way < mmu->ways - 1; way++) {
for (way = 0; way < n_ways - 1; way++) {
int n;
if (!pd0[way])
continue;
for (n = way + 1; n < mmu->ways; n++) {
for (n = way + 1; n < n_ways; n++) {
if (pd0[way] != pd0[n])
continue;


@ -9,5 +9,6 @@ menuconfig ARC_SOC_HSDK
bool "ARC HS Development Kit SOC"
depends on ISA_ARCV2
select ARC_HAS_ACCL_REGS
select ARC_IRQ_NO_AUTOSAVE
select CLK_HSDK
select RESET_HSDK


@ -1457,6 +1457,7 @@ config NR_CPUS
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
select GENERIC_IRQ_MIGRATION
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.


@ -1030,14 +1030,21 @@ choice
Say Y here if you want kernel low-level debugging support
on SOCFPGA(Cyclone 5 and Arria 5) based platforms.
config DEBUG_SOCFPGA_UART1
config DEBUG_SOCFPGA_ARRIA10_UART1
depends on ARCH_SOCFPGA
bool "Use SOCFPGA UART1 for low-level debug"
bool "Use SOCFPGA Arria10 UART1 for low-level debug"
select DEBUG_UART_8250
help
Say Y here if you want kernel low-level debugging support
on SOCFPGA(Arria 10) based platforms.
config DEBUG_SOCFPGA_CYCLONE5_UART1
depends on ARCH_SOCFPGA
bool "Use SOCFPGA Cyclone 5 UART1 for low-level debug"
select DEBUG_UART_8250
help
Say Y here if you want kernel low-level debugging support
on SOCFPGA(Cyclone 5 and Arria 5) based platforms.
config DEBUG_SUN9I_UART0
bool "Kernel low-level debugging messages via sun9i UART0"
@ -1383,22 +1390,21 @@ config DEBUG_OMAP2PLUS_UART
depends on ARCH_OMAP2PLUS
config DEBUG_IMX_UART_PORT
int "i.MX Debug UART Port Selection" if DEBUG_IMX1_UART || \
DEBUG_IMX25_UART || \
DEBUG_IMX21_IMX27_UART || \
DEBUG_IMX31_UART || \
DEBUG_IMX35_UART || \
DEBUG_IMX50_UART || \
DEBUG_IMX51_UART || \
DEBUG_IMX53_UART || \
DEBUG_IMX6Q_UART || \
DEBUG_IMX6SL_UART || \
DEBUG_IMX6SLL_UART || \
DEBUG_IMX6SX_UART || \
DEBUG_IMX6UL_UART || \
DEBUG_IMX7D_UART
int "i.MX Debug UART Port Selection"
depends on DEBUG_IMX1_UART || \
DEBUG_IMX25_UART || \
DEBUG_IMX21_IMX27_UART || \
DEBUG_IMX31_UART || \
DEBUG_IMX35_UART || \
DEBUG_IMX50_UART || \
DEBUG_IMX51_UART || \
DEBUG_IMX53_UART || \
DEBUG_IMX6Q_UART || \
DEBUG_IMX6SL_UART || \
DEBUG_IMX6SX_UART || \
DEBUG_IMX6UL_UART || \
DEBUG_IMX7D_UART
default 1
depends on ARCH_MXC
help
Choose UART port on which kernel low-level debug messages
should be output.
@ -1594,7 +1600,8 @@ config DEBUG_UART_PHYS
default 0xfe800000 if ARCH_IOP32X
default 0xff690000 if DEBUG_RK32_UART2
default 0xffc02000 if DEBUG_SOCFPGA_UART0
default 0xffc02100 if DEBUG_SOCFPGA_UART1
default 0xffc02100 if DEBUG_SOCFPGA_ARRIA10_UART1
default 0xffc03000 if DEBUG_SOCFPGA_CYCLONE5_UART1
default 0xffd82340 if ARCH_IOP13XX
default 0xffe40000 if DEBUG_RCAR_GEN1_SCIF0
default 0xffe42000 if DEBUG_RCAR_GEN1_SCIF2
@ -1698,7 +1705,8 @@ config DEBUG_UART_VIRT
default 0xfeb30c00 if DEBUG_KEYSTONE_UART0
default 0xfeb31000 if DEBUG_KEYSTONE_UART1
default 0xfec02000 if DEBUG_SOCFPGA_UART0
default 0xfec02100 if DEBUG_SOCFPGA_UART1
default 0xfec02100 if DEBUG_SOCFPGA_ARRIA10_UART1
default 0xfec03000 if DEBUG_SOCFPGA_CYCLONE5_UART1
default 0xfec12000 if (DEBUG_MVEBU_UART0 || DEBUG_MVEBU_UART0_ALTERNATE) && ARCH_MVEBU
default 0xfec12100 if DEBUG_MVEBU_UART1_ALTERNATE
default 0xfec10000 if DEBUG_SIRFATLAS7_UART0
@ -1746,9 +1754,9 @@ config DEBUG_UART_8250_WORD
depends on DEBUG_LL_UART_8250 || DEBUG_UART_8250
depends on DEBUG_UART_8250_SHIFT >= 2
default y if DEBUG_PICOXCELL_UART || \
DEBUG_SOCFPGA_UART0 || DEBUG_SOCFPGA_UART1 || \
DEBUG_KEYSTONE_UART0 || DEBUG_KEYSTONE_UART1 || \
DEBUG_ALPINE_UART0 || \
DEBUG_SOCFPGA_UART0 || DEBUG_SOCFPGA_ARRIA10_UART1 || \
DEBUG_SOCFPGA_CYCLONE5_UART1 || DEBUG_KEYSTONE_UART0 || \
DEBUG_KEYSTONE_UART1 || DEBUG_ALPINE_UART0 || \
DEBUG_DAVINCI_DMx_UART0 || DEBUG_DAVINCI_DA8XX_UART1 || \
DEBUG_DAVINCI_DA8XX_UART2 || \
DEBUG_BCM_KONA_UART || DEBUG_RK32_UART2


@ -1393,7 +1393,21 @@ ENTRY(efi_stub_entry)
@ Preserve return value of efi_entry() in r4
mov r4, r0
bl cache_clean_flush
@ our cache maintenance code relies on CP15 barrier instructions
@ but since we arrived here with the MMU and caches configured
@ by UEFI, we must check that the CP15BEN bit is set in SCTLR.
@ Note that this bit is RAO/WI on v6 and earlier, so the ISB in
@ the enable path will be executed on v7+ only.
mrc p15, 0, r1, c1, c0, 0 @ read SCTLR
tst r1, #(1 << 5) @ CP15BEN bit set?
bne 0f
orr r1, r1, #(1 << 5) @ CP15 barrier instructions
mcr p15, 0, r1, c1, c0, 0 @ write SCTLR
ARM( .inst 0xf57ff06f @ v7+ isb )
THUMB( isb )
0: bl cache_clean_flush
bl cache_off
@ Set parameters for booting zImage according to boot protocol


@ -2,10 +2,14 @@
#ifndef _ARM_LIBFDT_ENV_H
#define _ARM_LIBFDT_ENV_H
#include <linux/limits.h>
#include <linux/types.h>
#include <linux/string.h>
#include <asm/byteorder.h>
#define INT32_MAX S32_MAX
#define UINT32_MAX U32_MAX
typedef __be16 fdt16_t;
typedef __be32 fdt32_t;
typedef __be64 fdt64_t;


@ -57,6 +57,24 @@
enable-active-high;
};
/* TPS79501 */
v1_8d_reg: fixedregulator-v1_8d {
compatible = "regulator-fixed";
regulator-name = "v1_8d";
vin-supply = <&vbat>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
/* TPS79501 */
v3_3d_reg: fixedregulator-v3_3d {
compatible = "regulator-fixed";
regulator-name = "v3_3d";
vin-supply = <&vbat>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
matrix_keypad: matrix_keypad0 {
compatible = "gpio-matrix-keypad";
debounce-delay-ms = <5>;
@ -492,10 +510,10 @@
status = "okay";
/* Regulators */
AVDD-supply = <&vaux2_reg>;
IOVDD-supply = <&vaux2_reg>;
DRVDD-supply = <&vaux2_reg>;
DVDD-supply = <&vbat>;
AVDD-supply = <&v3_3d_reg>;
IOVDD-supply = <&v3_3d_reg>;
DRVDD-supply = <&v3_3d_reg>;
DVDD-supply = <&v1_8d_reg>;
};
};
@ -706,6 +724,7 @@
pinctrl-0 = <&cpsw_default>;
pinctrl-1 = <&cpsw_sleep>;
status = "okay";
slaves = <1>;
};
&davinci_mdio {
@ -713,15 +732,14 @@
pinctrl-0 = <&davinci_mdio_default>;
pinctrl-1 = <&davinci_mdio_sleep>;
status = "okay";
ethphy0: ethernet-phy@0 {
reg = <0>;
};
};
&cpsw_emac0 {
phy_id = <&davinci_mdio>, <0>;
phy-mode = "rgmii-txid";
};
&cpsw_emac1 {
phy_id = <&davinci_mdio>, <1>;
phy-handle = <&ethphy0>;
phy-mode = "rgmii-txid";
};


@ -73,6 +73,24 @@
enable-active-high;
};
/* TPS79518 */
v1_8d_reg: fixedregulator-v1_8d {
compatible = "regulator-fixed";
regulator-name = "v1_8d";
vin-supply = <&vbat>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
/* TPS78633 */
v3_3d_reg: fixedregulator-v3_3d {
compatible = "regulator-fixed";
regulator-name = "v3_3d";
vin-supply = <&vbat>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
leds {
pinctrl-names = "default";
pinctrl-0 = <&user_leds_s0>;
@ -493,10 +511,10 @@
status = "okay";
/* Regulators */
AVDD-supply = <&vaux2_reg>;
IOVDD-supply = <&vaux2_reg>;
DRVDD-supply = <&vaux2_reg>;
DVDD-supply = <&vbat>;
AVDD-supply = <&v3_3d_reg>;
IOVDD-supply = <&v3_3d_reg>;
DRVDD-supply = <&v3_3d_reg>;
DVDD-supply = <&v1_8d_reg>;
};
};


@ -197,7 +197,7 @@
bus-width = <4>;
pinctrl-names = "default";
pinctrl-0 = <&mmc1_pins>;
cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
status = "okay";
};


@ -157,7 +157,7 @@
bus-width = <4>;
pinctrl-names = "default";
pinctrl-0 = <&mmc1_pins>;
cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
status = "okay";
};


@ -1118,6 +1118,8 @@
ti,hwmods = "dss_dispc";
clocks = <&disp_clk>;
clock-names = "fck";
max-memory-bandwidth = <230000000>;
};
rfbi: rfbi@4832a800 {


@ -83,7 +83,7 @@
};
lcd0: display {
compatible = "osddisplays,osd057T0559-34ts", "panel-dpi";
compatible = "osddisplays,osd070t1718-19ts", "panel-dpi";
label = "lcd";
panel-timing {


@ -45,7 +45,7 @@
};
lcd0: display {
compatible = "osddisplays,osd057T0559-34ts", "panel-dpi";
compatible = "osddisplays,osd070t1718-19ts", "panel-dpi";
label = "lcd";
panel-timing {


@ -405,6 +405,7 @@
vqmmc-supply = <&ldo1_reg>;
bus-width = <4>;
cd-gpios = <&gpio6 27 GPIO_ACTIVE_LOW>; /* gpio 219 */
no-1-8-v;
};
&mmc2 {


@ -334,7 +334,7 @@
clock-names = "uartclk", "apb_pclk";
};
ssp: ssp@1000d000 {
ssp: spi@1000d000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x1000d000 0x1000>;
clocks = <&sspclk>, <&pclk>;


@ -45,7 +45,7 @@
};
/* The voltage to the MMC card is hardwired at 3.3V */
vmmc: fixedregulator@0 {
vmmc: regulator-vmmc {
compatible = "regulator-fixed";
regulator-name = "vmmc";
regulator-min-microvolt = <3300000>;
@ -53,7 +53,7 @@
regulator-boot-on;
};
veth: fixedregulator@0 {
veth: regulator-veth {
compatible = "regulator-fixed";
regulator-name = "veth";
regulator-min-microvolt = <3300000>;
@ -343,7 +343,7 @@
clock-names = "apb_pclk";
};
pb1176_ssp: ssp@1010b000 {
pb1176_ssp: spi@1010b000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x1010b000 0x1000>;
interrupt-parent = <&intc_dc1176>;


@ -145,7 +145,7 @@
};
/* The voltage to the MMC card is hardwired at 3.3V */
vmmc: fixedregulator@0 {
vmmc: regulator-vmmc {
compatible = "regulator-fixed";
regulator-name = "vmmc";
regulator-min-microvolt = <3300000>;
@ -153,7 +153,7 @@
regulator-boot-on;
};
veth: fixedregulator@0 {
veth: regulator-veth {
compatible = "regulator-fixed";
regulator-name = "veth";
regulator-min-microvolt = <3300000>;
@ -480,7 +480,7 @@
clock-names = "uartclk", "apb_pclk";
};
ssp@1000d000 {
spi@1000d000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x1000d000 0x1000>;
interrupt-parent = <&intc_pb11mp>;


@ -43,7 +43,7 @@
};
/* The voltage to the MMC card is hardwired at 3.3V */
vmmc: fixedregulator@0 {
vmmc: regulator-vmmc {
compatible = "regulator-fixed";
regulator-name = "vmmc";
regulator-min-microvolt = <3300000>;
@ -51,7 +51,7 @@
regulator-boot-on;
};
veth: fixedregulator@0 {
veth: regulator-veth {
compatible = "regulator-fixed";
regulator-name = "veth";
regulator-min-microvolt = <3300000>;
@ -318,7 +318,7 @@
clock-names = "uartclk", "apb_pclk";
};
ssp: ssp@1000d000 {
ssp: spi@1000d000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x1000d000 0x1000>;
clocks = <&sspclk>, <&pclk>;
@ -539,4 +539,3 @@
};
};
};


@ -89,7 +89,7 @@
&clearfog_sdhci_cd_pins>;
pinctrl-names = "default";
status = "okay";
vmmc = <&reg_3p3v>;
vmmc-supply = <&reg_3p3v>;
wp-inverted;
};


@ -240,7 +240,7 @@
rootfs@800000 {
label = "rootfs";
reg = <0x800000 0x0f800000>;
reg = <0x800000 0x1f800000>;
};
};
};


@ -566,7 +566,7 @@
};
};
uart1 {
usart1 {
pinctrl_usart1: usart1-0 {
atmel,pins =
<AT91_PIOB 4 AT91_PERIPH_A AT91_PINCTRL_PULL_UP /* PB4 periph A with pullup */


@ -88,7 +88,7 @@
rootfs@800000 {
label = "rootfs";
reg = <0x800000 0x1f800000>;
reg = <0x800000 0x0f800000>;
};
};
};


@ -165,8 +165,8 @@
mdio: mdio@18002000 {
compatible = "brcm,iproc-mdio";
reg = <0x18002000 0x8>;
#size-cells = <1>;
#address-cells = <0>;
#size-cells = <0>;
#address-cells = <1>;
status = "disabled";
gphy0: ethernet-phy@0 {


@ -93,7 +93,7 @@
};
&hdmi {
hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
};
&uart0 {


@ -38,7 +38,7 @@
trips {
cpu-crit {
temperature = <80000>;
temperature = <90000>;
hysteresis = <0>;
type = "critical";
};


@ -169,7 +169,7 @@
sound {
compatible = "simple-audio-card";
simple-audio-card,name = "DA850/OMAP-L138 EVM";
simple-audio-card,name = "DA850-OMAPL138 EVM";
simple-audio-card,widgets =
"Line", "Line In",
"Line", "Line Out";


@ -28,7 +28,7 @@
sound {
compatible = "simple-audio-card";
simple-audio-card,name = "DA850/OMAP-L138 LCDK";
simple-audio-card,name = "DA850-OMAPL138 LCDK";
simple-audio-card,widgets =
"Line", "Line In",
"Line", "Line Out";


@ -87,7 +87,7 @@
status = "okay";
clock-frequency = <100000>;
si5351: clock-generator {
si5351: clock-generator@60 {
compatible = "silabs,si5351a-msop";
reg = <0x60>;
#address-cells = <1>;


@ -155,7 +155,7 @@
0xffffe000 MBUS_ID(0x03, 0x01) 0 0x0000800 /* CESA SRAM 2k */
0xfffff000 MBUS_ID(0x0d, 0x00) 0 0x0000800>; /* PMU SRAM 2k */
spi0: spi-ctrl@10600 {
spi0: spi@10600 {
compatible = "marvell,orion-spi";
#address-cells = <1>;
#size-cells = <0>;
@ -168,7 +168,7 @@
status = "disabled";
};
i2c: i2c-ctrl@11000 {
i2c: i2c@11000 {
compatible = "marvell,mv64xxx-i2c";
reg = <0x11000 0x20>;
#address-cells = <1>;
@ -218,7 +218,7 @@
status = "disabled";
};
spi1: spi-ctrl@14600 {
spi1: spi@14600 {
compatible = "marvell,orion-spi";
#address-cells = <1>;
#size-cells = <0>;


@ -314,6 +314,7 @@
<0 0 0 2 &pcie1_intc 2>,
<0 0 0 3 &pcie1_intc 3>,
<0 0 0 4 &pcie1_intc 4>;
ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
status = "disabled";
pcie1_intc: interrupt-controller {
interrupt-controller;
@ -367,6 +368,7 @@
<0 0 0 2 &pcie2_intc 2>,
<0 0 0 3 &pcie2_intc 3>,
<0 0 0 4 &pcie2_intc 4>;
ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
pcie2_intc: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
@ -1540,6 +1542,7 @@
dr_mode = "otg";
snps,dis_u3_susphy_quirk;
snps,dis_u2_susphy_quirk;
snps,dis_metastability_quirk;
};
};


@ -32,7 +32,7 @@
*
* Datamanual Revisions:
*
* AM572x Silicon Revision 2.0: SPRS953B, Revised November 2016
* AM572x Silicon Revision 2.0: SPRS953F, Revised May 2019
* AM572x Silicon Revision 1.1: SPRS915R, Revised November 2016
*
*/
@ -229,45 +229,45 @@
mmc3_pins_default: mmc3_pins_default {
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
>;
};
mmc3_pins_hs: mmc3_pins_hs {
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
>;
};
mmc3_pins_sdr12: mmc3_pins_sdr12 {
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
>;
};
mmc3_pins_sdr25: mmc3_pins_sdr25 {
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
DRA7XX_CORE_IOPAD(0x377c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_clk.mmc3_clk */
DRA7XX_CORE_IOPAD(0x3780, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_cmd.mmc3_cmd */
DRA7XX_CORE_IOPAD(0x3784, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat0.mmc3_dat0 */
DRA7XX_CORE_IOPAD(0x3788, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat1.mmc3_dat1 */
DRA7XX_CORE_IOPAD(0x378c, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat2.mmc3_dat2 */
DRA7XX_CORE_IOPAD(0x3790, (PIN_INPUT_PULLUP | MODE_SELECT | MUX_MODE0)) /* mmc3_dat3.mmc3_dat3 */
>;
};


@ -172,6 +172,9 @@
interrupt-controller;
#interrupt-cells = <3>;
interrupt-parent = <&gic>;
clock-names = "clkout8";
clocks = <&cmu CLK_FIN_PLL>;
#clock-cells = <1>;
};
mipi_phy: video-phy {
@ -356,7 +359,7 @@
};
hsotg: hsotg@12480000 {
compatible = "snps,dwc2";
compatible = "samsung,s3c6400-hsotg", "snps,dwc2";
reg = <0x12480000 0x20000>;
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cmu CLK_USBOTG>;


@ -60,7 +60,7 @@
};
emmc_pwrseq: pwrseq {
pinctrl-0 = <&sd1_cd>;
pinctrl-0 = <&emmc_rstn>;
pinctrl-names = "default";
compatible = "mmc-pwrseq-emmc";
reset-gpios = <&gpk1 2 GPIO_ACTIVE_LOW>;
@ -161,12 +161,6 @@
cpu0-supply = <&buck2_reg>;
};
/* RSTN signal for eMMC */
&sd1_cd {
samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
};
&pinctrl_1 {
gpio_power_key: power_key {
samsung,pins = "gpx1-3";
@ -184,6 +178,11 @@
samsung,pins = "gpx3-7";
samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
};
emmc_rstn: emmc-rstn {
samsung,pins = "gpk1-2";
samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
};
};
&ehci {


@ -169,6 +169,8 @@
reg = <0x66>;
interrupt-parent = <&gpx3>;
interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
pinctrl-names = "default";
pinctrl-0 = <&s5m8767_irq>;
vinb1-supply = <&main_dc_reg>;
vinb2-supply = <&main_dc_reg>;
@ -544,6 +546,13 @@
cap-sd-highspeed;
};
&pinctrl_0 {
s5m8767_irq: s5m8767-irq {
samsung,pins = "gpx3-2";
samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
};
};
&rtc {
status = "okay";
};


@ -23,6 +23,14 @@
samsung,model = "Snow-I2S-MAX98090";
samsung,audio-codec = <&max98090>;
cpu {
sound-dai = <&i2s0 0>;
};
codec {
sound-dai = <&max98090 0>, <&hdmi>;
};
};
};
@ -34,6 +42,9 @@
interrupt-parent = <&gpx0>;
pinctrl-names = "default";
pinctrl-0 = <&max98090_irq>;
clocks = <&pmu_system_controller 0>;
clock-names = "mclk";
#sound-dai-cells = <1>;
};
};


@ -226,7 +226,7 @@
wakeup-interrupt-controller {
compatible = "samsung,exynos4210-wakeup-eint";
interrupt-parent = <&gic>;
interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
};
};


@ -109,6 +109,7 @@
regulator-name = "PVDD_APIO_1V8";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
};
ldo3_reg: LDO3 {
@ -147,6 +148,7 @@
regulator-name = "PVDD_ABB_1V8";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
};
ldo9_reg: LDO9 {


@ -301,6 +301,7 @@
regulator-name = "vdd_1v35";
regulator-min-microvolt = <1350000>;
regulator-max-microvolt = <1350000>;
regulator-always-on;
regulator-boot-on;
regulator-state-mem {
regulator-on-in-suspend;
@ -322,6 +323,7 @@
regulator-name = "vdd_2v";
regulator-min-microvolt = <2000000>;
regulator-max-microvolt = <2000000>;
regulator-always-on;
regulator-boot-on;
regulator-state-mem {
regulator-on-in-suspend;
@ -332,6 +334,7 @@
regulator-name = "vdd_1v8";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
regulator-boot-on;
regulator-state-mem {
regulator-on-in-suspend;
@ -426,6 +429,7 @@
regulator-name = "vdd_ldo10";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
regulator-state-mem {
regulator-off-in-suspend;
};


@ -23,7 +23,7 @@
"Headphone Jack", "HPL",
"Headphone Jack", "HPR",
"Headphone Jack", "MICBIAS",
"IN1", "Headphone Jack",
"IN12", "Headphone Jack",
"Speakers", "SPKL",
"Speakers", "SPKR";

Some files were not shown because too many files have changed in this diff.