This is the 5.4.33 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl6ZbdIACgkQONu9yGCS
 aT5Jqw/7BZ639nTAAmz809yOF1JBhvBptRg9tBKYAfw62DzfZe5s9IZA6znIt0f0
 nlluLvnhHlDSpgycHNkFry5AkiCUpRQW6NY681xITm918w3BxsKX2pfCawojIOSw
 YBaTWoWqNFQQWlC18L1CWJmIvIktSCHXBMTVDpnvRv7sw5A4Oe/zarzVDNb0A6OJ
 ThaR7LAKJrUEDDLuCOuB/IrYCOpOzg2SkViFmlo4wmmhvCSi8PXvf3royrSFXxM/
 Y1bs67Hu/uqeHl8Y2RaMZpXf1aW9F31sooca4GD+UnVoWppjIOKRyuGLTrXKv7pw
 /goIzlE8wfJz5K0iQ4UcbXwwdY81L9UlMdVsmIWHHgxMjSp1J5mfQ5TLUC/VK3UO
 Ll9tCYBwH4FjzxNRJq7if8TDAfgPzyhw4BMchgXZWzW1oasl51T2uEye3KgFXQSb
 u6TwCx4KGS0w/Q81SKis83Pb0unHGanJOSCZxI1B44raf0ruCBpTYUc713pfegWT
 46YtwoorAK8N+GpFQA1tsTJvVclqCF5bHVE19TMvXV4UX/VTPtbIAUE7vnvcxcqO
 uh0b9Jfmd6Fcgh7VZQCH7CUYnsyJmGqj2kycB1p+T8UB6H+PCeuQBZBl8sJnf8oj
 d5NIzB7WWXBQuEG5XuPtxg6+ARMPEIpd2exEVn9ZOv5qhesBo04=
 =pjAq
 -----END PGP SIGNATURE-----

Merge tag 'v5.4.33' into 5.4-1.0.0-imx

This is the 5.4.33 stable release
Gary Bisson 2020-04-17 16:07:32 +02:00
commit 4b462e01ed
266 changed files with 2914 additions and 1237 deletions


@ -8,3 +8,4 @@ HD-Audio
models
controls
dp-mst
realtek-pc-beep


@ -216,8 +216,6 @@ alc298-dell-aio
ALC298 fixups on Dell AIO machines
alc275-dell-xps
ALC275 fixups on Dell XPS models
alc256-dell-xps13
ALC256 fixups on Dell XPS13
lenovo-spk-noise
Workaround for speaker noise on Lenovo machines
lenovo-hotkey


@ -0,0 +1,129 @@
===============================
Realtek PC Beep Hidden Register
===============================
This file documents the "PC Beep Hidden Register", which is present in certain
Realtek HDA codecs and controls a muxer and pair of passthrough mixers that can
route audio between pins but aren't themselves exposed as HDA widgets. As far
as I can tell, these hidden routes are designed to allow flexible PC Beep output
for codecs that don't have mixer widgets in their output paths. Why it's easier
to hide a mixer behind an undocumented vendor register than to just expose it
as a widget, I have no idea.
Register Description
====================
The register is accessed via processing coefficient 0x36 on NID 20h. Bits not
identified below have no discernible effect on my machine, a Dell XPS 13 9350::
MSB                           LSB
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |h|S|L|         | B |R|       | Known bits
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
|0|0|1|1|  0x7  |0|0x0|1|  0x7  | Reset value
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1Ah input select (B): 2 bits
When zero, expose the PC Beep line (from the internal beep generator, when
enabled with the Set Beep Generation verb on NID 01h, or else from the
external PCBEEP pin) on the 1Ah pin node. When nonzero, expose the headphone
jack (or possibly Line In on some machines) input instead. If PC Beep is
selected, the 1Ah boost control has no effect.
Amplify 1Ah loopback, left (L): 1 bit
Amplify the left channel of 1Ah before mixing it into outputs as specified
by h and S bits. Does not affect the level of 1Ah exposed to other widgets.
Amplify 1Ah loopback, right (R): 1 bit
Amplify the right channel of 1Ah before mixing it into outputs as specified
by h and S bits. Does not affect the level of 1Ah exposed to other widgets.
Loopback 1Ah to 21h [active low] (h): 1 bit
When zero, mix 1Ah (possibly with amplification, depending on L and R bits)
into 21h (headphone jack on my machine). Mixed signal respects the mute
setting on 21h.
Loopback 1Ah to 14h (S): 1 bit
When one, mix 1Ah (possibly with amplification, depending on L and R bits)
into 14h (internal speaker on my machine). Mixed signal **ignores** the mute
setting on 14h and is present whenever 14h is configured as an output.
Path diagrams
=============
1Ah input selection (DIV is the PC Beep divider set on NID 01h)::
<Beep generator>   <PCBEEP pin>      <Headphone jack>
       |                |                   |
       +--DIV--+--!DIV--+          {1Ah boost control}
               |                            |
               +----(b == 0)--+--(b != 0)---+
                              |
             >1Ah (Beep/Headphone Mic/Line In)<
Loopback of 1Ah to 21h/14h::
<1Ah (Beep/Headphone Mic/Line In)>
                |
         {amplify if L/R}
                |
   +-----!h-----+-----S-----+
   |                        |
{21h mute control}          |
   |                        |
>21h (Headphone)<   >14h (Internal Speaker)<
Background
==========
All Realtek HDA codecs have a vendor-defined widget with node ID 20h which
provides access to a bank of registers that control various codec functions.
Registers are read and written via the standard HDA processing coefficient
verbs (Set/Get Coefficient Index, Set/Get Processing Coefficient). The node is
named "Realtek Vendor Registers" in public datasheets' verb listings and,
apart from that, is entirely undocumented.
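For reference, access is the usual two-step coefficient sequence on the vendor widget: select the coefficient index, then read or write the processing coefficient. A minimal sketch using the in-kernel HDA helpers might look like the following; the helper names are mine, and ``patch_realtek.c`` has its own static ``alc_read_coef_idx()``/``alc_write_coef_idx()`` equivalents::

  #include <sound/hda_codec.h>
  #include <sound/hda_verbs.h>

  #define REALTEK_VENDOR_NID  0x20  /* "Realtek Vendor Registers" widget */
  #define PCBEEP_HIDDEN_COEF  0x36  /* the register described in this file */

  /* Select a vendor coefficient index, then read that coefficient. */
  static unsigned int vendor_coef_read(struct hda_codec *codec, unsigned int idx)
  {
          snd_hda_codec_write(codec, REALTEK_VENDOR_NID, 0,
                              AC_VERB_SET_COEF_INDEX, idx);
          return snd_hda_codec_read(codec, REALTEK_VENDOR_NID, 0,
                                    AC_VERB_GET_PROC_COEF, 0);
  }

  /* Select a vendor coefficient index, then write that coefficient. */
  static void vendor_coef_write(struct hda_codec *codec, unsigned int idx,
                                unsigned int val)
  {
          snd_hda_codec_write(codec, REALTEK_VENDOR_NID, 0,
                              AC_VERB_SET_COEF_INDEX, idx);
          snd_hda_codec_write(codec, REALTEK_VENDOR_NID, 0,
                              AC_VERB_SET_PROC_COEF, val);
  }
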
This particular register, exposed at coefficient 0x36 and named in commits from
Realtek, is of note: unlike most registers, which seem to control detailed
amplifier parameters not in scope of the HDA specification, it controls audio
routing which could just as easily have been defined using standard HDA mixer
and selector widgets.
Specifically, it selects between two sources for the input pin widget with Node
ID (NID) 1Ah: the widget's signal can come either from an audio jack (on my
laptop, a Dell XPS 13 9350, it's the headphone jack, but comments in Realtek
commits indicate that it might be a Line In on some machines) or from the PC
Beep line (which is itself multiplexed between the codec's internal beep
generator and external PCBEEP pin, depending on if the beep generator is
enabled via verbs on NID 01h). Additionally, it can mix (with optional
amplification) that signal onto the 21h and/or 14h output pins.
The register's reset value is 0x3717, corresponding to PC Beep on 1Ah that is
then amplified and mixed into both the headphones and the speakers. Not only
does this violate the HDA specification, which says that "[a vendor defined
beep input pin] connection may be maintained *only* while the Link reset
(**RST#**) is asserted", it means that we cannot ignore the register if we care
about the input that 1Ah would otherwise expose or if the PCBEEP trace is
poorly shielded and picks up chassis noise (both of which are the case on my
machine).
Unfortunately, there are lots of ways to get this register configuration wrong.
Linux, it seems, has gone through most of them. For one, the register resets
after S3 suspend: judging by existing code, this isn't the case for all vendor
registers, and it's led to some fixes that improve behavior on cold boot but
don't last after suspend. Other fixes have successfully switched the 1Ah input
away from PC Beep but have failed to disable both loopback paths. On my
machine, this means that the headphone input is amplified and looped back to
the headphone output, which uses the exact same pins! As you might expect, this
causes terrible headphone noise, the character of which is controlled by the
1Ah boost control. (If you've seen instructions online to fix XPS 13 headphone
noise by changing "Headphone Mic Boost" in ALSA, now you know why.)
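To make that concrete, here is a minimal sketch of the shape a corrective fixup could take: switch the 1Ah input away from PC Beep, disable both loopback paths, and do it from ``HDA_FIXUP_ACT_INIT`` so that it is re-applied when the codec is re-initialized after S3 resume. The bit masks are the ones I infer from the register diagram and reset value above, the helpers are the ``vendor_coef_read()``/``vendor_coef_write()`` sketches from earlier, and the function name is made up; this is an illustration, not the actual upstream fix::

  /* Bit positions as I read the diagram above; treat them as unverified. */
  #define PCBEEP_LOOP_21H_DIS  0x4000  /* h: active low, set it to disable */
  #define PCBEEP_LOOP_14H_EN   0x2000  /* S: set means loopback into 14h */
  #define PCBEEP_INPUT_SEL     0x0060  /* B: any nonzero value selects the jack */

  static void alc_fixup_pcbeep_routing(struct hda_codec *codec,
                                       const struct hda_fixup *fix, int action)
  {
          unsigned int val;

          if (action != HDA_FIXUP_ACT_INIT)
                  return;

          val = vendor_coef_read(codec, PCBEEP_HIDDEN_COEF);
          val |= PCBEEP_LOOP_21H_DIS;   /* no loopback into the headphones */
          val &= ~PCBEEP_LOOP_14H_EN;   /* no loopback into the speaker */
          val |= PCBEEP_INPUT_SEL;      /* expose the jack, not PC Beep, on 1Ah */
          vendor_coef_write(codec, PCBEEP_HIDDEN_COEF, val);
  }

The point is not the exact values but that the input select and both loopback bits have to be handled together, from a hook that runs again after suspend.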
The information here has been obtained through black-box reverse engineering of
the ALC256 codec's behavior and is not guaranteed to be correct. It likely
also applies to the ALC255, ALC257, ALC235, and ALC236, since those codecs
seem to be close relatives of the ALC256. (They all share one initialization
function.) Additionally, other codecs like the ALC225 and ALC285 also have this
register, judging by existing fixups in ``patch_realtek.c``, but specific
data (e.g. node IDs, bit positions, pin mappings) for those codecs may differ
from what I've described here.


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 4
SUBLEVEL = 32
SUBLEVEL = 33
EXTRAVERSION =
NAME = Kleptomaniac Octopus


@ -24,12 +24,12 @@
&cpsw_emac0 {
phy-handle = <&ethphy0>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
};
&cpsw_emac1 {
phy-handle = <&ethphy1>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
};
&davinci_mdio {


@ -33,12 +33,12 @@
&cpsw_emac0 {
phy-handle = <&ethphy0>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
};
&cpsw_emac1 {
phy-handle = <&ethphy1>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
};
&davinci_mdio {


@ -24,12 +24,12 @@
&cpsw_emac0 {
phy-handle = <&ethphy0>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
};
&cpsw_emac1 {
phy-handle = <&ethphy1>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
};
&davinci_mdio {


@ -115,7 +115,7 @@
gpio-sck = <&gpy3 1 GPIO_ACTIVE_HIGH>;
gpio-mosi = <&gpy3 3 GPIO_ACTIVE_HIGH>;
num-chipselects = <1>;
cs-gpios = <&gpy4 3 GPIO_ACTIVE_HIGH>;
cs-gpios = <&gpy4 3 GPIO_ACTIVE_LOW>;
lcd@0 {
compatible = "samsung,ld9040";
@ -124,8 +124,6 @@
vci-supply = <&ldo17_reg>;
reset-gpios = <&gpy4 5 GPIO_ACTIVE_HIGH>;
spi-max-frequency = <1200000>;
spi-cpol;
spi-cpha;
power-on-delay = <10>;
reset-delay = <10>;
panel-width-mm = <90>;


@ -358,8 +358,8 @@
};
&reg_dldo3 {
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-name = "vdd-csi";
};


@ -72,6 +72,10 @@ stack_protector_prepare: prepare0
include/generated/asm-offsets.h))
endif
# Ensure that if the compiler supports branch protection we default it
# off.
KBUILD_CFLAGS += $(call cc-option,-mbranch-protection=none)
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian
CHECKFLAGS += -D__AARCH64EB__


@ -77,8 +77,7 @@
};
pmu {
compatible = "arm,cortex-a53-pmu",
"arm,armv8-pmuv3";
compatible = "arm,cortex-a53-pmu";
interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,


@ -71,8 +71,7 @@
};
pmu {
compatible = "arm,cortex-a53-pmu",
"arm,armv8-pmuv3";
compatible = "arm,cortex-a53-pmu";
interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,


@ -307,6 +307,7 @@
interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
dma-coherent;
power-domains = <&k3_pds 151 TI_SCI_PD_EXCLUSIVE>;
clocks = <&k3_clks 151 2>, <&k3_clks 151 7>;
assigned-clocks = <&k3_clks 151 2>, <&k3_clks 151 7>;
assigned-clock-parents = <&k3_clks 151 4>, /* set REF_CLK to 20MHz i.e. PER0_PLL/48 */
<&k3_clks 151 9>; /* set PIPE3_TXB_CLK to CLK_12M_RC/256 (for HS only) */
@ -346,6 +347,7 @@
interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
dma-coherent;
power-domains = <&k3_pds 152 TI_SCI_PD_EXCLUSIVE>;
clocks = <&k3_clks 152 2>;
assigned-clocks = <&k3_clks 152 2>;
assigned-clock-parents = <&k3_clks 152 4>; /* set REF_CLK to 20MHz i.e. PER0_PLL/48 */


@ -601,7 +601,7 @@ static struct undef_hook setend_hooks[] = {
},
{
/* Thumb mode */
.instr_mask = 0x0000fff7,
.instr_mask = 0xfffffff7,
.instr_val = 0x0000b650,
.pstate_mask = (PSR_AA32_T_BIT | PSR_AA32_MODE_MASK),
.pstate_val = (PSR_AA32_T_BIT | PSR_AA32_MODE_USR),


@ -2199,6 +2199,9 @@ static int octeon_irq_cib_map(struct irq_domain *d,
}
cd = kzalloc(sizeof(*cd), GFP_KERNEL);
if (!cd)
return -ENOMEM;
cd->host_data = host_data;
cd->bit = hw;


@ -1480,6 +1480,7 @@ static void build_r4000_tlb_refill_handler(void)
static void setup_pw(void)
{
unsigned int pwctl;
unsigned long pgd_i, pgd_w;
#ifndef __PAGETABLE_PMD_FOLDED
unsigned long pmd_i, pmd_w;
@ -1506,6 +1507,7 @@ static void setup_pw(void)
pte_i = ilog2(_PAGE_GLOBAL);
pte_w = 0;
pwctl = 1 << 30; /* Set PWDirExt */
#ifndef __PAGETABLE_PMD_FOLDED
write_c0_pwfield(pgd_i << 24 | pmd_i << 12 | pt_i << 6 | pte_i);
@ -1516,8 +1518,9 @@ static void setup_pw(void)
#endif
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
write_c0_pwctl(1 << 6 | psn);
pwctl |= (1 << 6 | psn);
#endif
write_c0_pwctl(pwctl);
write_c0_kpgd((long)swapper_pg_dir);
kscratch_used_mask |= (1 << 7); /* KScratch6 is used for KPGD */
}


@ -156,6 +156,12 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
extern int hash__has_transparent_hugepage(void);
#endif
static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
{
BUG();
return pmd;
}
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */


@ -246,7 +246,7 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
*/
static inline int hash__pmd_trans_huge(pmd_t pmd)
{
return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)) ==
(_PAGE_PTE | H_PAGE_THP_HUGE));
}
@ -272,6 +272,12 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp);
extern int hash__has_transparent_hugepage(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
{
return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP));
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */


@ -1303,7 +1303,9 @@ extern void serialize_against_pte_lookup(struct mm_struct *mm);
static inline pmd_t pmd_mkdevmap(pmd_t pmd)
{
return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP));
if (radix_enabled())
return radix__pmd_mkdevmap(pmd);
return hash__pmd_mkdevmap(pmd);
}
static inline int pmd_devmap(pmd_t pmd)


@ -263,6 +263,11 @@ static inline int radix__has_transparent_hugepage(void)
}
#endif
static inline pmd_t radix__pmd_mkdevmap(pmd_t pmd)
{
return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP));
}
extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
unsigned long page_size,
unsigned long phys);


@ -27,12 +27,12 @@ struct drmem_lmb_info {
extern struct drmem_lmb_info *drmem_info;
#define for_each_drmem_lmb_in_range(lmb, start, end) \
for ((lmb) = (start); (lmb) <= (end); (lmb)++)
for ((lmb) = (start); (lmb) < (end); (lmb)++)
#define for_each_drmem_lmb(lmb) \
for_each_drmem_lmb_in_range((lmb), \
&drmem_info->lmbs[0], \
&drmem_info->lmbs[drmem_info->n_lmbs - 1])
&drmem_info->lmbs[drmem_info->n_lmbs])
/*
* The of_drconf_cell_v1 struct defines the layout of the LMB data


@ -7,7 +7,9 @@
#define JMP_BUF_LEN 23
extern long setjmp(long *) __attribute__((returns_twice));
extern void longjmp(long *, long) __attribute__((noreturn));
typedef long jmp_buf[JMP_BUF_LEN];
extern int setjmp(jmp_buf env) __attribute__((returns_twice));
extern void longjmp(jmp_buf env, int val) __attribute__((noreturn));
#endif /* _ASM_POWERPC_SETJMP_H */


@ -5,9 +5,6 @@
CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
# Avoid clang warnings around longjmp/setjmp declarations
CFLAGS_crash.o += -ffreestanding
ifdef CONFIG_PPC64
CFLAGS_prom_init.o += $(NO_MINIMAL_TOC)
endif


@ -139,7 +139,6 @@ static void __init cpufeatures_setup_cpu(void)
/* Initialize the base environment -- clear FSCR/HFSCR. */
hv_mode = !!(mfmsr() & MSR_HV);
if (hv_mode) {
/* CPU_FTR_HVMODE is used early in PACA setup */
cur_cpu_spec->cpu_features |= CPU_FTR_HVMODE;
mtspr(SPRN_HFSCR, 0);
}


@ -264,6 +264,9 @@ int kprobe_handler(struct pt_regs *regs)
if (user_mode(regs))
return 0;
if (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR))
return 0;
/*
* We don't want to be preempted for the entire
* duration of kprobe processing


@ -176,7 +176,7 @@ static struct slb_shadow * __init new_slb_shadow(int cpu, unsigned long limit)
struct paca_struct **paca_ptrs __read_mostly;
EXPORT_SYMBOL(paca_ptrs);
void __init initialise_paca(struct paca_struct *new_paca, int cpu)
void __init __nostackprotector initialise_paca(struct paca_struct *new_paca, int cpu)
{
#ifdef CONFIG_PPC_PSERIES
new_paca->lppaca_ptr = NULL;
@ -205,7 +205,7 @@ void __init initialise_paca(struct paca_struct *new_paca, int cpu)
}
/* Put the paca pointer into r13 and SPRG_PACA */
void setup_paca(struct paca_struct *new_paca)
void __nostackprotector setup_paca(struct paca_struct *new_paca)
{
/* Setup r13 */
local_paca = new_paca;
@ -214,11 +214,15 @@ void setup_paca(struct paca_struct *new_paca)
/* On Book3E, initialize the TLB miss exception frames */
mtspr(SPRN_SPRG_TLB_EXFRAME, local_paca->extlb);
#else
/* In HV mode, we setup both HPACA and PACA to avoid problems
/*
* In HV mode, we setup both HPACA and PACA to avoid problems
* if we do a GET_PACA() before the feature fixups have been
* applied
* applied.
*
* Normally you should test against CPU_FTR_HVMODE, but CPU features
* are not yet set up when we first reach here.
*/
if (early_cpu_has_feature(CPU_FTR_HVMODE))
if (mfmsr() & MSR_HV)
mtspr(SPRN_SPRG_HPACA, local_paca);
#endif
mtspr(SPRN_SPRG_PACA, local_paca);


@ -8,6 +8,12 @@
#ifndef __ARCH_POWERPC_KERNEL_SETUP_H
#define __ARCH_POWERPC_KERNEL_SETUP_H
#ifdef CONFIG_CC_IS_CLANG
#define __nostackprotector
#else
#define __nostackprotector __attribute__((__optimize__("no-stack-protector")))
#endif
void initialize_cache_info(void);
void irqstack_early_init(void);


@ -284,24 +284,42 @@ void __init record_spr_defaults(void)
* device-tree is not accessible via normal means at this point.
*/
void __init early_setup(unsigned long dt_ptr)
void __init __nostackprotector early_setup(unsigned long dt_ptr)
{
static __initdata struct paca_struct boot_paca;
/* -------- printk is _NOT_ safe to use here ! ------- */
/* Try new device tree based feature discovery ... */
if (!dt_cpu_ftrs_init(__va(dt_ptr)))
/* Otherwise use the old style CPU table */
identify_cpu(0, mfspr(SPRN_PVR));
/* Assume we're on cpu 0 for now. Don't write to the paca yet! */
/*
* Assume we're on cpu 0 for now.
*
* We need to load a PACA very early for a few reasons.
*
* The stack protector canary is stored in the paca, so as soon as we
* call any stack protected code we need r13 pointing somewhere valid.
*
* If we are using kcov it will call in_task() in its instrumentation,
* which relies on the current task from the PACA.
*
* dt_cpu_ftrs_init() calls into generic OF/fdt code, as well as
* printk(), which can trigger both stack protector and kcov.
*
* percpu variables and spin locks also use the paca.
*
* So set up a temporary paca. It will be replaced below once we know
* what CPU we are on.
*/
initialise_paca(&boot_paca, 0);
setup_paca(&boot_paca);
fixup_boot_paca();
/* -------- printk is now safe to use ------- */
/* Try new device tree based feature discovery ... */
if (!dt_cpu_ftrs_init(__va(dt_ptr)))
/* Otherwise use the old style CPU table */
identify_cpu(0, mfspr(SPRN_PVR));
/* Enable early debugging if any specified (see udbg.h) */
udbg_early_init();


@ -473,8 +473,10 @@ static long restore_tm_sigcontexts(struct task_struct *tsk,
err |= __get_user(tsk->thread.ckpt_regs.ccr,
&sc->gp_regs[PT_CCR]);
/* Don't allow userspace to set the trap value */
regs->trap = 0;
/* These regs are not checkpointed; they can go in 'regs'. */
err |= __get_user(regs->trap, &sc->gp_regs[PT_TRAP]);
err |= __get_user(regs->dar, &sc->gp_regs[PT_DAR]);
err |= __get_user(regs->dsisr, &sc->gp_regs[PT_DSISR]);
err |= __get_user(regs->result, &sc->gp_regs[PT_RESULT]);


@ -117,7 +117,7 @@ static void __init kasan_remap_early_shadow_ro(void)
kasan_populate_pte(kasan_early_shadow_pte, prot);
for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
for (k_cur = k_start & PAGE_MASK; k_cur != k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
pte_t *ptep = pte_offset_kernel(pmd, k_cur);


@ -397,7 +397,7 @@ _GLOBAL(set_context)
* extern void loadcam_entry(unsigned int index)
*
* Load TLBCAM[index] entry in to the L2 CAM MMU
* Must preserve r7, r8, r9, and r10
* Must preserve r7, r8, r9, r10 and r11
*/
_GLOBAL(loadcam_entry)
mflr r5
@ -433,6 +433,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
*/
_GLOBAL(loadcam_multi)
mflr r8
/* Don't switch to AS=1 if already there */
mfmsr r11
andi. r11,r11,MSR_IS
bne 10f
/*
* Set up temporary TLB entry that is the same as what we're
@ -458,6 +462,7 @@ _GLOBAL(loadcam_multi)
mtmsr r6
isync
10:
mr r9,r3
add r10,r3,r4
2: bl loadcam_entry
@ -466,6 +471,10 @@ _GLOBAL(loadcam_multi)
mr r3,r9
blt 2b
/* Don't return to AS=0 if we were in AS=1 at function start */
andi. r11,r11,MSR_IS
bne 3f
/* Return to AS=0 and clear the temporary entry */
mfmsr r6
rlwinm. r6,r6,0,~(MSR_IS|MSR_DS)
@ -481,6 +490,7 @@ _GLOBAL(loadcam_multi)
tlbwe
isync
3:
mtlr r8
blr
#endif


@ -223,7 +223,7 @@ static int get_lmb_range(u32 drc_index, int n_lmbs,
struct drmem_lmb **end_lmb)
{
struct drmem_lmb *lmb, *start, *end;
struct drmem_lmb *last_lmb;
struct drmem_lmb *limit;
start = NULL;
for_each_drmem_lmb(lmb) {
@ -236,10 +236,10 @@ static int get_lmb_range(u32 drc_index, int n_lmbs,
if (!start)
return -EINVAL;
end = &start[n_lmbs - 1];
end = &start[n_lmbs];
last_lmb = &drmem_info->lmbs[drmem_info->n_lmbs - 1];
if (end > last_lmb)
limit = &drmem_info->lmbs[drmem_info->n_lmbs];
if (end > limit)
return -EINVAL;
*start_lmb = start;


@ -1992,7 +1992,7 @@ static int __init vpa_debugfs_init(void)
{
char name[16];
long i;
static struct dentry *vpa_dir;
struct dentry *vpa_dir;
if (!firmware_has_feature(FW_FEATURE_SPLPAR))
return 0;


@ -68,13 +68,6 @@ static u32 xive_ipi_irq;
/* Xive state for each CPU */
static DEFINE_PER_CPU(struct xive_cpu *, xive_cpu);
/*
* A "disabled" interrupt should never fire, to catch problems
* we set its logical number to this
*/
#define XIVE_BAD_IRQ 0x7fffffff
#define XIVE_MAX_IRQ (XIVE_BAD_IRQ - 1)
/* An invalid CPU target */
#define XIVE_INVALID_TARGET (-1)
@ -265,11 +258,15 @@ notrace void xmon_xive_do_dump(int cpu)
int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
{
struct irq_chip *chip = irq_data_get_irq_chip(d);
int rc;
u32 target;
u8 prio;
u32 lirq;
if (!is_xive_irq(chip))
return -EINVAL;
rc = xive_ops->get_irq_config(hw_irq, &target, &prio, &lirq);
if (rc) {
xmon_printf("IRQ 0x%08x : no config rc=%d\n", hw_irq, rc);
@ -1150,7 +1147,7 @@ static int xive_setup_cpu_ipi(unsigned int cpu)
xc = per_cpu(xive_cpu, cpu);
/* Check if we are already setup */
if (xc->hw_ipi != 0)
if (xc->hw_ipi != XIVE_BAD_IRQ)
return 0;
/* Grab an IPI from the backend, this will populate xc->hw_ipi */
@ -1187,7 +1184,7 @@ static void xive_cleanup_cpu_ipi(unsigned int cpu, struct xive_cpu *xc)
/* Disable the IPI and free the IRQ data */
/* Already cleaned up ? */
if (xc->hw_ipi == 0)
if (xc->hw_ipi == XIVE_BAD_IRQ)
return;
/* Mask the IPI */
@ -1343,6 +1340,7 @@ static int xive_prepare_cpu(unsigned int cpu)
if (np)
xc->chip_id = of_get_ibm_chip_id(np);
of_node_put(np);
xc->hw_ipi = XIVE_BAD_IRQ;
per_cpu(xive_cpu, cpu) = xc;
}


@ -312,7 +312,7 @@ static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc)
s64 rc;
/* Free the IPI */
if (!xc->hw_ipi)
if (xc->hw_ipi == XIVE_BAD_IRQ)
return;
for (;;) {
rc = opal_xive_free_irq(xc->hw_ipi);
@ -320,7 +320,7 @@ static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc)
msleep(OPAL_BUSY_DELAY_MS);
continue;
}
xc->hw_ipi = 0;
xc->hw_ipi = XIVE_BAD_IRQ;
break;
}
}


@ -560,11 +560,11 @@ static int xive_spapr_get_ipi(unsigned int cpu, struct xive_cpu *xc)
static void xive_spapr_put_ipi(unsigned int cpu, struct xive_cpu *xc)
{
if (!xc->hw_ipi)
if (xc->hw_ipi == XIVE_BAD_IRQ)
return;
xive_irq_bitmap_free(xc->hw_ipi);
xc->hw_ipi = 0;
xc->hw_ipi = XIVE_BAD_IRQ;
}
#endif /* CONFIG_SMP */


@ -5,6 +5,13 @@
#ifndef __XIVE_INTERNAL_H
#define __XIVE_INTERNAL_H
/*
* A "disabled" interrupt should never fire, to catch problems
* we set its logical number to this
*/
#define XIVE_BAD_IRQ 0x7fffffff
#define XIVE_MAX_IRQ (XIVE_BAD_IRQ - 1)
/* Each CPU carry one of these with various per-CPU state */
struct xive_cpu {
#ifdef CONFIG_SMP


@ -1,9 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
# Makefile for xmon
# Avoid clang warnings around longjmp/setjmp declarations
subdir-ccflags-y := -ffreestanding
GCOV_PROFILE := n
KCOV_INSTRUMENT := n
UBSAN_SANITIZE := n


@ -84,7 +84,7 @@ static int show_diag_stat(struct seq_file *m, void *v)
static void *show_diag_stat_start(struct seq_file *m, loff_t *pos)
{
return *pos <= nr_cpu_ids ? (void *)((unsigned long) *pos + 1) : NULL;
return *pos <= NR_DIAG_STAT ? (void *)((unsigned long) *pos + 1) : NULL;
}
static void *show_diag_stat_next(struct seq_file *m, void *v, loff_t *pos)


@ -1202,6 +1202,7 @@ static int vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->iprcc = PGM_ADDRESSING;
scb_s->pgmilc = 4;
scb_s->gpsw.addr = __rewind_psw(scb_s->gpsw, 4);
rc = 1;
}
return rc;
}


@ -787,14 +787,18 @@ static void gmap_call_notifier(struct gmap *gmap, unsigned long start,
static inline unsigned long *gmap_table_walk(struct gmap *gmap,
unsigned long gaddr, int level)
{
const int asce_type = gmap->asce & _ASCE_TYPE_MASK;
unsigned long *table;
if ((gmap->asce & _ASCE_TYPE_MASK) + 4 < (level * 4))
return NULL;
if (gmap_is_shadow(gmap) && gmap->removed)
return NULL;
if (gaddr & (-1UL << (31 + ((gmap->asce & _ASCE_TYPE_MASK) >> 2)*11)))
if (asce_type != _ASCE_TYPE_REGION1 &&
gaddr & (-1UL << (31 + (asce_type >> 2) * 11)))
return NULL;
table = gmap->table;
switch (gmap->asce & _ASCE_TYPE_MASK) {
case _ASCE_TYPE_REGION1:


@ -106,7 +106,7 @@ ENTRY(startup_32)
notl %eax
andl %eax, %ebx
cmpl $LOAD_PHYSICAL_ADDR, %ebx
jge 1f
jae 1f
#endif
movl $LOAD_PHYSICAL_ADDR, %ebx
1:


@ -106,7 +106,7 @@ ENTRY(startup_32)
notl %eax
andl %eax, %ebx
cmpl $LOAD_PHYSICAL_ADDR, %ebx
jge 1f
jae 1f
#endif
movl $LOAD_PHYSICAL_ADDR, %ebx
1:
@ -297,7 +297,7 @@ ENTRY(startup_64)
notq %rax
andq %rax, %rbp
cmpq $LOAD_PHYSICAL_ADDR, %rbp
jge 1f
jae 1f
#endif
movq $LOAD_PHYSICAL_ADDR, %rbp
1:


@ -1647,6 +1647,7 @@ ENTRY(int3)
END(int3)
ENTRY(general_protection)
ASM_CLAC
pushl $do_general_protection
jmp common_exception
END(general_protection)


@ -1130,7 +1130,7 @@ struct kvm_x86_ops {
bool (*pt_supported)(void);
bool (*pku_supported)(void);
int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
int (*check_nested_events)(struct kvm_vcpu *vcpu);
void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
void (*sched_in)(struct kvm_vcpu *kvm, int cpu);


@ -624,12 +624,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
return __pmd(val);
}
/* mprotect needs to preserve PAT bits when updating vm_page_prot */
/*
* mprotect needs to preserve PAT and encryption bits when updating
* vm_page_prot
*/
#define pgprot_modify pgprot_modify
static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
{
pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
pgprotval_t addbits = pgprot_val(newprot);
pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
return __pgprot(preservebits | addbits);
}


@ -123,7 +123,7 @@
*/
#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
_PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
_PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
_PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
/*


@ -1740,7 +1740,7 @@ int __acpi_acquire_global_lock(unsigned int *lock)
new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
val = cmpxchg(lock, old, new);
} while (unlikely (val != old));
return (new < 3) ? -1 : 0;
return ((new & 0x3) < 3) ? -1 : 0;
}
int __acpi_release_global_lock(unsigned int *lock)


@ -15,18 +15,46 @@
#include <asm/param.h>
#include <asm/tsc.h>
#define MAX_NUM_FREQS 9
#define MAX_NUM_FREQS 16 /* 4 bits to select the frequency */
/*
* The frequency numbers in the SDM are e.g. 83.3 MHz, which does not contain a
* lot of accuracy which leads to clock drift. As far as we know Bay Trail SoCs
* use a 25 MHz crystal and Cherry Trail uses a 19.2 MHz crystal, the crystal
* is the source clk for a root PLL which outputs 1600 and 100 MHz. It is
* unclear if the root PLL outputs are used directly by the CPU clock PLL or
* if there is another PLL in between.
* This does not matter though, we can model the chain of PLLs as a single PLL
* with a quotient equal to the quotients of all PLLs in the chain multiplied.
* So we can create a simplified model of the CPU clock setup using a reference
* clock of 100 MHz plus a quotient which gets us as close to the frequency
* from the SDM as possible.
* For the 83.3 MHz example from above this would give us 100 MHz * 5 / 6 =
* 83 and 1/3 MHz, which matches exactly what has been measured on actual hw.
*/
#define TSC_REFERENCE_KHZ 100000
struct muldiv {
u32 multiplier;
u32 divider;
};
/*
* If MSR_PERF_STAT[31] is set, the maximum resolved bus ratio can be
* read in MSR_PLATFORM_ID[12:8], otherwise in MSR_PERF_STAT[44:40].
* Unfortunately some Intel Atom SoCs aren't quite compliant to this,
* so we need manually differentiate SoC families. This is what the
* field msr_plat does.
* field use_msr_plat does.
*/
struct freq_desc {
u8 msr_plat; /* 1: use MSR_PLATFORM_INFO, 0: MSR_IA32_PERF_STATUS */
bool use_msr_plat;
struct muldiv muldiv[MAX_NUM_FREQS];
/*
* Some CPU frequencies in the SDM do not map to known PLL freqs, in
* that case the muldiv array is empty and the freqs array is used.
*/
u32 freqs[MAX_NUM_FREQS];
u32 mask;
};
/*
@ -35,31 +63,81 @@ struct freq_desc {
* by MSR based on SDM.
*/
static const struct freq_desc freq_desc_pnw = {
0, { 0, 0, 0, 0, 0, 99840, 0, 83200 }
.use_msr_plat = false,
.freqs = { 0, 0, 0, 0, 0, 99840, 0, 83200 },
.mask = 0x07,
};
static const struct freq_desc freq_desc_clv = {
0, { 0, 133200, 0, 0, 0, 99840, 0, 83200 }
.use_msr_plat = false,
.freqs = { 0, 133200, 0, 0, 0, 99840, 0, 83200 },
.mask = 0x07,
};
/*
* Bay Trail SDM MSR_FSB_FREQ frequencies simplified PLL model:
* 000: 100 * 5 / 6 = 83.3333 MHz
* 001: 100 * 1 / 1 = 100.0000 MHz
* 010: 100 * 4 / 3 = 133.3333 MHz
* 011: 100 * 7 / 6 = 116.6667 MHz
* 100: 100 * 4 / 5 = 80.0000 MHz
*/
static const struct freq_desc freq_desc_byt = {
1, { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 }
.use_msr_plat = true,
.muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 7, 6 },
{ 4, 5 } },
.mask = 0x07,
};
/*
* Cherry Trail SDM MSR_FSB_FREQ frequencies simplified PLL model:
* 0000: 100 * 5 / 6 = 83.3333 MHz
* 0001: 100 * 1 / 1 = 100.0000 MHz
* 0010: 100 * 4 / 3 = 133.3333 MHz
* 0011: 100 * 7 / 6 = 116.6667 MHz
* 0100: 100 * 4 / 5 = 80.0000 MHz
* 0101: 100 * 14 / 15 = 93.3333 MHz
* 0110: 100 * 9 / 10 = 90.0000 MHz
* 0111: 100 * 8 / 9 = 88.8889 MHz
* 1000: 100 * 7 / 8 = 87.5000 MHz
*/
static const struct freq_desc freq_desc_cht = {
1, { 83300, 100000, 133300, 116700, 80000, 93300, 90000, 88900, 87500 }
.use_msr_plat = true,
.muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 7, 6 },
{ 4, 5 }, { 14, 15 }, { 9, 10 }, { 8, 9 },
{ 7, 8 } },
.mask = 0x0f,
};
/*
* Merriefield SDM MSR_FSB_FREQ frequencies simplified PLL model:
* 0001: 100 * 1 / 1 = 100.0000 MHz
* 0010: 100 * 4 / 3 = 133.3333 MHz
*/
static const struct freq_desc freq_desc_tng = {
1, { 0, 100000, 133300, 0, 0, 0, 0, 0 }
.use_msr_plat = true,
.muldiv = { { 0, 0 }, { 1, 1 }, { 4, 3 } },
.mask = 0x07,
};
/*
* Moorefield SDM MSR_FSB_FREQ frequencies simplified PLL model:
* 0000: 100 * 5 / 6 = 83.3333 MHz
* 0001: 100 * 1 / 1 = 100.0000 MHz
* 0010: 100 * 4 / 3 = 133.3333 MHz
* 0011: 100 * 1 / 1 = 100.0000 MHz
*/
static const struct freq_desc freq_desc_ann = {
1, { 83300, 100000, 133300, 100000, 0, 0, 0, 0 }
.use_msr_plat = true,
.muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 1, 1 } },
.mask = 0x0f,
};
/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */
static const struct freq_desc freq_desc_lgm = {
1, { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }
.use_msr_plat = true,
.freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
.mask = 0x0f,
};
static const struct x86_cpu_id tsc_msr_cpu_ids[] = {
@ -81,17 +159,19 @@ static const struct x86_cpu_id tsc_msr_cpu_ids[] = {
*/
unsigned long cpu_khz_from_msr(void)
{
u32 lo, hi, ratio, freq;
u32 lo, hi, ratio, freq, tscref;
const struct freq_desc *freq_desc;
const struct x86_cpu_id *id;
const struct muldiv *md;
unsigned long res;
int index;
id = x86_match_cpu(tsc_msr_cpu_ids);
if (!id)
return 0;
freq_desc = (struct freq_desc *)id->driver_data;
if (freq_desc->msr_plat) {
if (freq_desc->use_msr_plat) {
rdmsr(MSR_PLATFORM_INFO, lo, hi);
ratio = (lo >> 8) & 0xff;
} else {
@ -101,12 +181,28 @@ unsigned long cpu_khz_from_msr(void)
/* Get FSB FREQ ID */
rdmsr(MSR_FSB_FREQ, lo, hi);
index = lo & freq_desc->mask;
md = &freq_desc->muldiv[index];
/* Map CPU reference clock freq ID(0-7) to CPU reference clock freq(KHz) */
freq = freq_desc->freqs[lo & 0x7];
/*
* Note this also catches cases where the index points to an unpopulated
* part of muldiv, in that case the else will set freq and res to 0.
*/
if (md->divider) {
tscref = TSC_REFERENCE_KHZ * md->multiplier;
freq = DIV_ROUND_CLOSEST(tscref, md->divider);
/*
* Multiplying by ratio before the division has better
* accuracy than just calculating freq * ratio.
*/
res = DIV_ROUND_CLOSEST(tscref * ratio, md->divider);
} else {
freq = freq_desc->freqs[index];
res = freq * ratio;
}
/* TSC frequency = maximum resolved freq * maximum resolved bus ratio */
res = freq * ratio;
if (freq == 0)
pr_err("Error MSR_FSB_FREQ index %d is unknown\n", index);
#ifdef CONFIG_X86_LOCAL_APIC
lapic_timer_period = (freq * 1000) / HZ;


@ -1926,6 +1926,10 @@ static struct kvm *svm_vm_alloc(void)
struct kvm_svm *kvm_svm = __vmalloc(sizeof(struct kvm_svm),
GFP_KERNEL_ACCOUNT | __GFP_ZERO,
PAGE_KERNEL);
if (!kvm_svm)
return NULL;
return &kvm_svm->kvm;
}


@ -3460,7 +3460,7 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI, intr_info, exit_qual);
}
static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long exit_qual;
@ -3507,8 +3507,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
return 0;
}
if ((kvm_cpu_has_interrupt(vcpu) || external_intr) &&
nested_exit_on_intr(vcpu)) {
if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(vcpu)) {
if (block_nested_events)
return -EBUSY;
nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
@ -4158,17 +4157,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
if (likely(!vmx->fail)) {
/*
* TODO: SDM says that with acknowledge interrupt on
* exit, bit 31 of the VM-exit interrupt information
* (valid interrupt) is always set to 1 on
* EXIT_REASON_EXTERNAL_INTERRUPT, so we shouldn't
* need kvm_cpu_has_interrupt(). See the commit
* message for details.
*/
if (nested_exit_intr_ack_set(vcpu) &&
exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT &&
kvm_cpu_has_interrupt(vcpu)) {
if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT &&
nested_exit_intr_ack_set(vcpu)) {
int irq = kvm_cpu_get_interrupt(vcpu);
WARN_ON(irq < 0);
vmcs12->vm_exit_intr_info = irq |


@ -12,7 +12,8 @@
#define __ex(x) __kvm_handle_fault_on_reboot(x)
asmlinkage void vmread_error(unsigned long field, bool fault);
__attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
bool fault);
void vmwrite_error(unsigned long field, unsigned long value);
void vmclear_error(struct vmcs *vmcs, u64 phys_addr);
void vmptrld_error(struct vmcs *vmcs, u64 phys_addr);
@ -70,15 +71,28 @@ static __always_inline unsigned long __vmcs_readl(unsigned long field)
asm volatile("1: vmread %2, %1\n\t"
".byte 0x3e\n\t" /* branch taken hint */
"ja 3f\n\t"
"mov %2, %%" _ASM_ARG1 "\n\t"
"xor %%" _ASM_ARG2 ", %%" _ASM_ARG2 "\n\t"
"2: call vmread_error\n\t"
"xor %k1, %k1\n\t"
/*
* VMREAD failed. Push '0' for @fault, push the failing
* @field, and bounce through the trampoline to preserve
* volatile registers.
*/
"push $0\n\t"
"push %2\n\t"
"2:call vmread_error_trampoline\n\t"
/*
* Unwind the stack. Note, the trampoline zeros out the
* memory for @fault so that the result is '0' on error.
*/
"pop %2\n\t"
"pop %1\n\t"
"3:\n\t"
/* VMREAD faulted. As above, except push '1' for @fault. */
".pushsection .fixup, \"ax\"\n\t"
"4: mov %2, %%" _ASM_ARG1 "\n\t"
"mov $1, %%" _ASM_ARG2 "\n\t"
"4: push $1\n\t"
"push %2\n\t"
"jmp 2b\n\t"
".popsection\n\t"
_ASM_EXTABLE(1b, 4b)


@ -234,3 +234,61 @@ ENTRY(__vmx_vcpu_run)
2: mov $1, %eax
jmp 1b
ENDPROC(__vmx_vcpu_run)
/**
* vmread_error_trampoline - Trampoline from inline asm to vmread_error()
* @field: VMCS field encoding that failed
* @fault: %true if the VMREAD faulted, %false if it failed
* Save and restore volatile registers across a call to vmread_error(). Note,
* all parameters are passed on the stack.
*/
ENTRY(vmread_error_trampoline)
push %_ASM_BP
mov %_ASM_SP, %_ASM_BP
push %_ASM_AX
push %_ASM_CX
push %_ASM_DX
#ifdef CONFIG_X86_64
push %rdi
push %rsi
push %r8
push %r9
push %r10
push %r11
#endif
#ifdef CONFIG_X86_64
/* Load @field and @fault to arg1 and arg2 respectively. */
mov 3*WORD_SIZE(%rbp), %_ASM_ARG2
mov 2*WORD_SIZE(%rbp), %_ASM_ARG1
#else
/* Parameters are passed on the stack for 32-bit (see asmlinkage). */
push 3*WORD_SIZE(%ebp)
push 2*WORD_SIZE(%ebp)
#endif
call vmread_error
#ifndef CONFIG_X86_64
add $8, %esp
#endif
/* Zero out @fault, which will be popped into the result register. */
_ASM_MOV $0, 3*WORD_SIZE(%_ASM_BP)
#ifdef CONFIG_X86_64
pop %r11
pop %r10
pop %r9
pop %r8
pop %rsi
pop %rdi
#endif
pop %_ASM_DX
pop %_ASM_CX
pop %_ASM_AX
pop %_ASM_BP
ret
ENDPROC(vmread_error_trampoline)


@ -648,43 +648,15 @@ void loaded_vmcs_init(struct loaded_vmcs *loaded_vmcs)
}
#ifdef CONFIG_KEXEC_CORE
/*
* This bitmap is used to indicate whether the vmclear
* operation is enabled on all cpus. All disabled by
* default.
*/
static cpumask_t crash_vmclear_enabled_bitmap = CPU_MASK_NONE;
static inline void crash_enable_local_vmclear(int cpu)
{
cpumask_set_cpu(cpu, &crash_vmclear_enabled_bitmap);
}
static inline void crash_disable_local_vmclear(int cpu)
{
cpumask_clear_cpu(cpu, &crash_vmclear_enabled_bitmap);
}
static inline int crash_local_vmclear_enabled(int cpu)
{
return cpumask_test_cpu(cpu, &crash_vmclear_enabled_bitmap);
}
static void crash_vmclear_local_loaded_vmcss(void)
{
int cpu = raw_smp_processor_id();
struct loaded_vmcs *v;
if (!crash_local_vmclear_enabled(cpu))
return;
list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
loaded_vmcss_on_cpu_link)
vmcs_clear(v->vmcs);
}
#else
static inline void crash_enable_local_vmclear(int cpu) { }
static inline void crash_disable_local_vmclear(int cpu) { }
#endif /* CONFIG_KEXEC_CORE */
static void __loaded_vmcs_clear(void *arg)
@ -696,19 +668,24 @@ static void __loaded_vmcs_clear(void *arg)
return; /* vcpu migration can race with cpu offline */
if (per_cpu(current_vmcs, cpu) == loaded_vmcs->vmcs)
per_cpu(current_vmcs, cpu) = NULL;
crash_disable_local_vmclear(cpu);
vmcs_clear(loaded_vmcs->vmcs);
if (loaded_vmcs->shadow_vmcs && loaded_vmcs->launched)
vmcs_clear(loaded_vmcs->shadow_vmcs);
list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);
/*
* we should ensure updating loaded_vmcs->loaded_vmcss_on_cpu_link
* is before setting loaded_vmcs->vcpu to -1 which is done in
* loaded_vmcs_init. Otherwise, other cpu can see vcpu = -1 fist
* then adds the vmcs into percpu list before it is deleted.
* Ensure all writes to loaded_vmcs, including deleting it from its
* current percpu list, complete before setting loaded_vmcs->vcpu to
* -1, otherwise a different cpu can see vcpu == -1 first and add
* loaded_vmcs to its percpu list before it's deleted from this cpu's
* list. Pairs with the smp_rmb() in vmx_vcpu_load_vmcs().
*/
smp_wmb();
loaded_vmcs_init(loaded_vmcs);
crash_enable_local_vmclear(cpu);
loaded_vmcs->cpu = -1;
loaded_vmcs->launched = 0;
}
void loaded_vmcs_clear(struct loaded_vmcs *loaded_vmcs)
@ -1317,18 +1294,17 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
if (!already_loaded) {
loaded_vmcs_clear(vmx->loaded_vmcs);
local_irq_disable();
crash_disable_local_vmclear(cpu);
/*
* Read loaded_vmcs->cpu should be before fetching
* loaded_vmcs->loaded_vmcss_on_cpu_link.
* See the comments in __loaded_vmcs_clear().
* Ensure loaded_vmcs->cpu is read before adding loaded_vmcs to
* this cpu's percpu list, otherwise it may not yet be deleted
* from its previous cpu's percpu list. Pairs with the
* smb_wmb() in __loaded_vmcs_clear().
*/
smp_rmb();
list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link,
&per_cpu(loaded_vmcss_on_cpu, cpu));
crash_enable_local_vmclear(cpu);
local_irq_enable();
}
@ -2252,21 +2228,6 @@ static int hardware_enable(void)
!hv_get_vp_assist_page(cpu))
return -EFAULT;
INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
/*
* Now we can enable the vmclear operation in kdump
* since the loaded_vmcss_on_cpu list on this cpu
* has been initialized.
*
* Though the cpu is not in VMX operation now, there
* is no problem to enable the vmclear operation
* for the loaded_vmcss_on_cpu list is empty!
*/
crash_enable_local_vmclear(cpu);
rdmsrl(MSR_IA32_FEATURE_CONTROL, old);
test_bits = FEATURE_CONTROL_LOCKED;
@ -4505,8 +4466,13 @@ static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
{
return (!to_vmx(vcpu)->nested.nested_run_pending &&
vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
if (to_vmx(vcpu)->nested.nested_run_pending)
return false;
if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
return true;
return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
}
@ -6684,6 +6650,10 @@ static struct kvm *vmx_vm_alloc(void)
struct kvm_vmx *kvm_vmx = __vmalloc(sizeof(struct kvm_vmx),
GFP_KERNEL_ACCOUNT | __GFP_ZERO,
PAGE_KERNEL);
if (!kvm_vmx)
return NULL;
return &kvm_vmx->kvm;
}
@ -8022,7 +7992,7 @@ module_exit(vmx_exit);
static int __init vmx_init(void)
{
int r;
int r, cpu;
#if IS_ENABLED(CONFIG_HYPERV)
/*
@ -8076,6 +8046,12 @@ static int __init vmx_init(void)
return r;
}
for_each_possible_cpu(cpu) {
INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
}
#ifdef CONFIG_KEXEC_CORE
rcu_assign_pointer(crash_vmclear_loaded_vmcss,
crash_vmclear_local_loaded_vmcss);


@ -7555,7 +7555,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
kvm_x86_ops->update_cr8_intercept(vcpu, tpr, max_irr);
}
static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
static int inject_pending_event(struct kvm_vcpu *vcpu)
{
int r;
@ -7591,7 +7591,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
* from L2 to L1.
*/
if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
r = kvm_x86_ops->check_nested_events(vcpu);
if (r != 0)
return r;
}
@ -7653,7 +7653,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
* KVM_REQ_EVENT only on certain events and not unconditionally?
*/
if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
r = kvm_x86_ops->check_nested_events(vcpu);
if (r != 0)
return r;
}
@ -8130,7 +8130,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
goto out;
}
if (inject_pending_event(vcpu, req_int_win) != 0)
if (inject_pending_event(vcpu) != 0)
req_immediate_exit = true;
else {
/* Enable SMI/NMI/IRQ window open exits if needed.
@ -8360,7 +8360,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
{
if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events)
kvm_x86_ops->check_nested_events(vcpu, false);
kvm_x86_ops->check_nested_events(vcpu);
return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
!vcpu->arch.apf.halted);
@ -9726,6 +9726,13 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
{
int i;
/*
* Clear out the previous array pointers for the KVM_MR_MOVE case. The
* old arrays will be freed by __kvm_set_memory_region() if installing
* the new memslot is successful.
*/
memset(&slot->arch, 0, sizeof(slot->arch));
for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
struct kvm_lpage_info *linfo;
unsigned long ugfn;
@ -9807,6 +9814,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
const struct kvm_userspace_memory_region *mem,
enum kvm_mr_change change)
{
if (change == KVM_MR_MOVE)
return kvm_arch_create_memslot(kvm, memslot,
mem->memory_size >> PAGE_SHIFT);
return 0;
}


@ -85,6 +85,8 @@ static const unsigned long * const efi_tables[] = {
#ifdef CONFIG_EFI_RCI2_TABLE
&rci2_table_phys,
#endif
&efi.tpm_log,
&efi.tpm_final_log,
};
u64 efi_setup; /* efi setup_data physical address */


@ -834,7 +834,7 @@ efi_thunk_set_variable(efi_char16_t *name, efi_guid_t *vendor,
phys_vendor = virt_to_phys_or_null(vnd);
phys_data = virt_to_phys_or_null_size(data, data_size);
if (!phys_name || !phys_data)
if (!phys_name || (data && !phys_data))
status = EFI_INVALID_PARAMETER;
else
status = efi_thunk(set_variable, phys_name, phys_vendor,
@ -865,7 +865,7 @@ efi_thunk_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor,
phys_vendor = virt_to_phys_or_null(vnd);
phys_data = virt_to_phys_or_null_size(data, data_size);
if (!phys_name || !phys_data)
if (!phys_name || (data && !phys_data))
status = EFI_INVALID_PARAMETER;
else
status = efi_thunk(set_variable, phys_name, phys_vendor,


@ -625,6 +625,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
{
struct bfq_entity *entity = &bfqq->entity;
/*
* Get extra reference to prevent bfqq from being freed in
* next possible expire or deactivate.
*/
bfqq->ref++;
/* If bfqq is empty, then bfq_bfqq_expire also invokes
* bfq_del_bfqq_busy, thereby removing bfqq and its entity
* from data structures related to current group. Otherwise we
@ -635,12 +641,6 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
bfq_bfqq_expire(bfqd, bfqd->in_service_queue,
false, BFQQE_PREEMPTED);
/*
* get extra reference to prevent bfqq from being freed in
* next possible deactivate
*/
bfqq->ref++;
if (bfq_bfqq_busy(bfqq))
bfq_deactivate_bfqq(bfqd, bfqq, false, false);
else if (entity->on_st)
@ -660,7 +660,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
bfq_schedule_dispatch(bfqd);
/* release extra ref taken above */
/* release extra ref taken above, bfqq may happen to be freed now */
bfq_put_queue(bfqq);
}


@ -6210,20 +6210,28 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
return bfqq;
}
static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq)
static void
bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
struct bfq_data *bfqd = bfqq->bfqd;
enum bfqq_expiration reason;
unsigned long flags;
spin_lock_irqsave(&bfqd->lock, flags);
bfq_clear_bfqq_wait_request(bfqq);
/*
* Considering that bfqq may be in race, we should firstly check
* whether bfqq is in service before doing something on it. If
* the bfqq in race is not in service, it has already been expired
* through __bfq_bfqq_expire func and its wait_request flags has
* been cleared in __bfq_bfqd_reset_in_service func.
*/
if (bfqq != bfqd->in_service_queue) {
spin_unlock_irqrestore(&bfqd->lock, flags);
return;
}
bfq_clear_bfqq_wait_request(bfqq);
if (bfq_bfqq_budget_timeout(bfqq))
/*
* Also here the queue can be safely expired
@ -6268,7 +6276,7 @@ static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer)
* early.
*/
if (bfqq)
bfq_idle_slice_timer_body(bfqq);
bfq_idle_slice_timer_body(bfqd, bfqq);
return HRTIMER_NORESTART;
}


@ -84,6 +84,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
* making it impossible to determine icq_cache. Record it in @icq.
*/
icq->__rcu_icq_cache = et->icq_cache;
icq->flags |= ICQ_DESTROYED;
call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
}
@ -212,15 +213,21 @@ static void __ioc_clear_queue(struct list_head *icq_list)
{
unsigned long flags;
rcu_read_lock();
while (!list_empty(icq_list)) {
struct io_cq *icq = list_entry(icq_list->next,
struct io_cq, q_node);
struct io_context *ioc = icq->ioc;
spin_lock_irqsave(&ioc->lock, flags);
if (icq->flags & ICQ_DESTROYED) {
spin_unlock_irqrestore(&ioc->lock, flags);
continue;
}
ioc_destroy_icq(icq);
spin_unlock_irqrestore(&ioc->lock, flags);
}
rcu_read_unlock();
}
/**


@ -664,6 +664,9 @@ void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
printk(KERN_NOTICE "%s: Warning: Device %s is misaligned\n",
top, bottom);
}
t->backing_dev_info->io_pages =
t->limits.max_sectors >> (PAGE_SHIFT - 9);
}
EXPORT_SYMBOL(disk_stack_limits);


@ -37,12 +37,16 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
crypto_stats_get(alg);
if (!seed && slen) {
buf = kmalloc(slen, GFP_KERNEL);
if (!buf)
if (!buf) {
crypto_alg_put(alg);
return -ENOMEM;
}
err = get_random_bytes_wait(buf, slen);
if (err)
if (err) {
crypto_alg_put(alg);
goto out;
}
seed = buf;
}


@ -101,7 +101,7 @@ acpi_status acpi_hw_enable_all_runtime_gpes(void);
acpi_status acpi_hw_enable_all_wakeup_gpes(void);
u8 acpi_hw_check_all_gpes(void);
u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number);
acpi_status
acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,


@ -799,17 +799,19 @@ ACPI_EXPORT_SYMBOL(acpi_enable_all_wakeup_gpes)
*
* FUNCTION: acpi_any_gpe_status_set
*
* PARAMETERS: None
* PARAMETERS: gpe_skip_number - Number of the GPE to skip
*
* RETURN: Whether or not the status bit is set for any GPE
*
* DESCRIPTION: Check the status bits of all enabled GPEs and return TRUE if any
* of them is set or FALSE otherwise.
* DESCRIPTION: Check the status bits of all enabled GPEs, except for the one
* represented by the "skip" argument, and return TRUE if any of
* them is set or FALSE otherwise.
*
******************************************************************************/
u32 acpi_any_gpe_status_set(void)
u32 acpi_any_gpe_status_set(u32 gpe_skip_number)
{
acpi_status status;
acpi_handle gpe_device;
u8 ret;
ACPI_FUNCTION_TRACE(acpi_any_gpe_status_set);
@ -819,7 +821,12 @@ u32 acpi_any_gpe_status_set(void)
return (FALSE);
}
ret = acpi_hw_check_all_gpes();
status = acpi_get_gpe_device(gpe_skip_number, &gpe_device);
if (ACPI_FAILURE(status)) {
gpe_device = NULL;
}
ret = acpi_hw_check_all_gpes(gpe_device, gpe_skip_number);
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return (ret);


@ -444,12 +444,19 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
return (AE_OK);
}
struct acpi_gpe_block_status_context {
struct acpi_gpe_register_info *gpe_skip_register_info;
u8 gpe_skip_mask;
u8 retval;
};
/******************************************************************************
*
* FUNCTION: acpi_hw_get_gpe_block_status
*
* PARAMETERS: gpe_xrupt_info - GPE Interrupt info
* gpe_block - Gpe Block info
* context - GPE list walk context data
*
* RETURN: Success
*
@ -460,12 +467,13 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
static acpi_status
acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
struct acpi_gpe_block_info *gpe_block,
void *ret_ptr)
void *context)
{
struct acpi_gpe_block_status_context *c = context;
struct acpi_gpe_register_info *gpe_register_info;
u64 in_enable, in_status;
acpi_status status;
u8 *ret = ret_ptr;
u8 ret_mask;
u32 i;
/* Examine each GPE Register within the block */
@ -485,7 +493,11 @@ acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
continue;
}
*ret |= in_enable & in_status;
ret_mask = in_enable & in_status;
if (ret_mask && c->gpe_skip_register_info == gpe_register_info) {
ret_mask &= ~c->gpe_skip_mask;
}
c->retval |= ret_mask;
}
return (AE_OK);
@ -561,24 +573,41 @@ acpi_status acpi_hw_enable_all_wakeup_gpes(void)
*
* FUNCTION: acpi_hw_check_all_gpes
*
* PARAMETERS: None
* PARAMETERS: gpe_skip_device - GPE device of the GPE to skip
* gpe_skip_number - Number of the GPE to skip
*
* RETURN: Combined status of all GPEs
*
* DESCRIPTION: Check all enabled GPEs in all GPE blocks and return TRUE if the
* DESCRIPTION: Check all enabled GPEs in all GPE blocks, except for the one
* represented by the "skip" arguments, and return TRUE if the
status bit is set for at least one of them or FALSE otherwise.
*
******************************************************************************/
u8 acpi_hw_check_all_gpes(void)
u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number)
{
u8 ret = 0;
struct acpi_gpe_block_status_context context = {
.gpe_skip_register_info = NULL,
.retval = 0,
};
struct acpi_gpe_event_info *gpe_event_info;
acpi_cpu_flags flags;
ACPI_FUNCTION_TRACE(acpi_hw_check_all_gpes);
(void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &ret);
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
return (ret != 0);
gpe_event_info = acpi_ev_get_gpe_event_info(gpe_skip_device,
gpe_skip_number);
if (gpe_event_info) {
context.gpe_skip_register_info = gpe_event_info->register_info;
context.gpe_skip_mask = acpi_hw_get_gpe_register_bit(gpe_event_info);
}
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
(void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &context);
return (context.retval != 0);
}
#endif /* !ACPI_REDUCED_HARDWARE */


@ -1573,7 +1573,6 @@ static int acpi_ec_add(struct acpi_device *device)
if (boot_ec && ec->command_addr == boot_ec->command_addr &&
ec->data_addr == boot_ec->data_addr) {
boot_ec_is_ecdt = false;
/*
* Trust PNP0C09 namespace location rather than
* ECDT ID. But trust ECDT GPE rather than _GPE
@ -1593,9 +1592,12 @@ static int acpi_ec_add(struct acpi_device *device)
if (ec == boot_ec)
acpi_handle_info(boot_ec->handle,
"Boot %s EC used to handle transactions and events\n",
"Boot %s EC initialization complete\n",
boot_ec_is_ecdt ? "ECDT" : "DSDT");
acpi_handle_info(ec->handle,
"EC: Used to handle transactions and events\n");
device->driver_data = ec;
ret = !!request_region(ec->data_addr, 1, "EC data");
@ -1962,6 +1964,11 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
}
bool acpi_ec_other_gpes_active(void)
{
return acpi_any_gpe_status_set(first_ec ? first_ec->gpe : U32_MAX);
}
bool acpi_ec_dispatch_gpe(void)
{
u32 ret;


@ -201,6 +201,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
#ifdef CONFIG_PM_SLEEP
void acpi_ec_flush_work(void);
bool acpi_ec_other_gpes_active(void);
bool acpi_ec_dispatch_gpe(void);
#endif


@ -1014,18 +1014,19 @@ static bool acpi_s2idle_wake(void)
return true;
/*
* If there are no EC events to process and at least one of the
* other enabled GPEs is active, the wakeup is regarded as a
* genuine one.
*
* Note that the checks below must be carried out in this order
* to avoid returning prematurely due to a change of the EC GPE
* status bit from unset to set between the checks with the
* status bits of all the other GPEs unset.
* If the status bit is set for any enabled GPE other than the
* EC one, the wakeup is regarded as a genuine one.
*/
if (acpi_any_gpe_status_set() && !acpi_ec_dispatch_gpe())
if (acpi_ec_other_gpes_active())
return true;
/*
* If the EC GPE status bit has not been set, the wakeup is
* regarded as a spurious one.
*/
if (!acpi_ec_dispatch_gpe())
return false;
/*
* Cancel the wakeup and process all pending events in case
* there are any wakeup ones in there.


@ -763,6 +763,7 @@ static int sata_pmp_eh_recover_pmp(struct ata_port *ap,
if (dev->flags & ATA_DFLAG_DETACH) {
detach = 1;
rc = -ENODEV;
goto fail;
}


@ -4553,22 +4553,19 @@ int ata_scsi_add_hosts(struct ata_host *host, struct scsi_host_template *sht)
*/
shost->max_host_blocked = 1;
rc = scsi_add_host_with_dma(ap->scsi_host,
&ap->tdev, ap->host->dev);
rc = scsi_add_host_with_dma(shost, &ap->tdev, ap->host->dev);
if (rc)
goto err_add;
goto err_alloc;
}
return 0;
err_add:
scsi_host_put(host->ports[i]->scsi_host);
err_alloc:
while (--i >= 0) {
struct Scsi_Host *shost = host->ports[i]->scsi_host;
/* scsi_host_put() is in ata_devres_release() */
scsi_remove_host(shost);
scsi_host_put(shost);
}
return rc;
}


@ -525,7 +525,7 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs,
}
retval = fw_sysfs_wait_timeout(fw_priv, timeout);
if (retval < 0) {
if (retval < 0 && retval != -ENOENT) {
mutex_lock(&fw_lock);
fw_load_abort(fw_sysfs);
mutex_unlock(&fw_lock);


@ -2631,7 +2631,7 @@ static int genpd_iterate_idle_states(struct device_node *dn,
ret = of_count_phandle_with_args(dn, "domain-idle-states", NULL);
if (ret <= 0)
return ret;
return ret == -ENOENT ? 0 : ret;
/* Loop over the phandles until all the requested entry is found */
of_for_each_phandle(&it, ret, dn, "domain-idle-states", NULL, 0) {

View File

@ -241,7 +241,9 @@ void wakeup_source_unregister(struct wakeup_source *ws)
{
if (ws) {
wakeup_source_remove(ws);
wakeup_source_sysfs_remove(ws);
if (ws->dev)
wakeup_source_sysfs_remove(ws);
wakeup_source_destroy(ws);
}
}


@ -579,6 +579,7 @@ static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq)
if (tag != -1U) {
cmd = &nq->cmds[tag];
cmd->tag = tag;
cmd->error = BLK_STS_OK;
cmd->nq = nq;
if (nq->dev->irqmode == NULL_IRQ_TIMER) {
hrtimer_init(&cmd->timer, CLOCK_MONOTONIC,
@ -1335,6 +1336,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
cmd->timer.function = null_cmd_timer_expired;
}
cmd->rq = bd->rq;
cmd->error = BLK_STS_OK;
cmd->nq = nq;
blk_mq_start_request(bd->rq);
@ -1382,7 +1384,12 @@ static void cleanup_queues(struct nullb *nullb)
static void null_del_dev(struct nullb *nullb)
{
struct nullb_device *dev = nullb->dev;
struct nullb_device *dev;
if (!nullb)
return;
dev = nullb->dev;
ida_simple_remove(&nullb_indexes, nullb->index);
@ -1736,6 +1743,7 @@ out_cleanup_queues:
cleanup_queues(nullb);
out_free_nullb:
kfree(nullb);
dev->nullb = NULL;
out:
return rv;
}


@ -47,6 +47,7 @@
#include <linux/bitmap.h>
#include <linux/list.h>
#include <linux/workqueue.h>
#include <linux/sched/mm.h>
#include <xen/xen.h>
#include <xen/xenbus.h>
@ -2188,10 +2189,12 @@ static void blkfront_setup_discard(struct blkfront_info *info)
static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
{
unsigned int psegs, grants;
unsigned int psegs, grants, memflags;
int err, i;
struct blkfront_info *info = rinfo->dev_info;
memflags = memalloc_noio_save();
if (info->max_indirect_segments == 0) {
if (!HAS_EXTRA_REQ)
grants = BLKIF_MAX_SEGMENTS_PER_REQUEST;
@ -2223,7 +2226,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
BUG_ON(!list_empty(&rinfo->indirect_pages));
for (i = 0; i < num; i++) {
struct page *indirect_page = alloc_page(GFP_NOIO);
struct page *indirect_page = alloc_page(GFP_KERNEL);
if (!indirect_page)
goto out_of_memory;
list_add(&indirect_page->lru, &rinfo->indirect_pages);
@ -2234,15 +2237,15 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
rinfo->shadow[i].grants_used =
kvcalloc(grants,
sizeof(rinfo->shadow[i].grants_used[0]),
GFP_NOIO);
GFP_KERNEL);
rinfo->shadow[i].sg = kvcalloc(psegs,
sizeof(rinfo->shadow[i].sg[0]),
GFP_NOIO);
GFP_KERNEL);
if (info->max_indirect_segments)
rinfo->shadow[i].indirect_grants =
kvcalloc(INDIRECT_GREFS(grants),
sizeof(rinfo->shadow[i].indirect_grants[0]),
GFP_NOIO);
GFP_KERNEL);
if ((rinfo->shadow[i].grants_used == NULL) ||
(rinfo->shadow[i].sg == NULL) ||
(info->max_indirect_segments &&
@ -2251,6 +2254,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
sg_init_table(rinfo->shadow[i].sg, psegs);
}
memalloc_noio_restore(memflags);
return 0;
@ -2270,6 +2274,9 @@ out_of_memory:
__free_page(indirect_page);
}
}
memalloc_noio_restore(memflags);
return -ENOMEM;
}
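The xen-blkfront hunk above replaces per-call GFP_NOIO flags with a single memalloc_noio_save()/memalloc_noio_restore() scope, inside which plain GFP_KERNEL allocations implicitly behave as no-IO allocations, including allocations made by callees. A minimal sketch of the pattern, not the driver code itself:

    #include <linux/gfp.h>
    #include <linux/sched/mm.h>

    /* Illustrative helper: allocate a page in a context that must not
     * recurse into block I/O, without tagging the call site GFP_NOIO. */
    static struct page *alloc_page_noio_scope(void)
    {
            unsigned int memflags;
            struct page *page;

            memflags = memalloc_noio_save();   /* allocations below act as GFP_NOIO */
            page = alloc_page(GFP_KERNEL);
            memalloc_noio_restore(memflags);   /* restore on every exit path */

            return page;
    }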


@ -345,7 +345,7 @@ static int sunxi_rsb_read(struct sunxi_rsb *rsb, u8 rtaddr, u8 addr,
if (ret)
goto unlock;
*buf = readl(rsb->regs + RSB_DATA);
*buf = readl(rsb->regs + RSB_DATA) & GENMASK(len * 8 - 1, 0);
unlock:
mutex_unlock(&rsb->lock);
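In the sunxi-rsb hunk above, RSB_DATA is read as a full 32-bit word, so a narrow read can contain leftover bytes from an earlier, wider transfer; masking with GENMASK(len * 8 - 1, 0) keeps only the requested width. Worked values for illustration:

    len == 1:  GENMASK(7, 0)  == 0x000000ff   /* keep one byte  */
    len == 2:  GENMASK(15, 0) == 0x0000ffff   /* keep two bytes */
    len == 4:  GENMASK(31, 0) == 0xffffffff   /* full word, mask is a no-op */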


@ -3207,8 +3207,8 @@ static void __get_guid(struct ipmi_smi *intf)
if (rv)
/* Send failed, no GUID available. */
bmc->dyn_guid_set = 0;
wait_event(intf->waitq, bmc->dyn_guid_set != 2);
else
wait_event(intf->waitq, bmc->dyn_guid_set != 2);
/* dyn_guid_set makes the guid data available. */
smp_rmb();


@ -99,11 +99,8 @@ static int tpm_read_log(struct tpm_chip *chip)
*
* If an event log is found then the securityfs files are setup to
* export it to userspace, otherwise nothing is done.
*
* Returns -ENODEV if the firmware has no event log or securityfs is not
* supported.
*/
int tpm_bios_log_setup(struct tpm_chip *chip)
void tpm_bios_log_setup(struct tpm_chip *chip)
{
const char *name = dev_name(&chip->dev);
unsigned int cnt;
@ -112,7 +109,7 @@ int tpm_bios_log_setup(struct tpm_chip *chip)
rc = tpm_read_log(chip);
if (rc < 0)
return rc;
return;
log_version = rc;
cnt = 0;
@ -158,13 +155,12 @@ int tpm_bios_log_setup(struct tpm_chip *chip)
cnt++;
}
return 0;
return;
err:
rc = PTR_ERR(chip->bios_dir[cnt]);
chip->bios_dir[cnt] = NULL;
tpm_bios_log_teardown(chip);
return rc;
return;
}
void tpm_bios_log_teardown(struct tpm_chip *chip)


@ -115,6 +115,7 @@ static void *tpm1_bios_measurements_next(struct seq_file *m, void *v,
u32 converted_event_size;
u32 converted_event_type;
(*pos)++;
converted_event_size = do_endian_conversion(event->event_size);
v += sizeof(struct tcpa_event) + converted_event_size;
@ -132,7 +133,6 @@ static void *tpm1_bios_measurements_next(struct seq_file *m, void *v,
((v + sizeof(struct tcpa_event) + converted_event_size) > limit))
return NULL;
(*pos)++;
return v;
}


@ -94,6 +94,7 @@ static void *tpm2_bios_measurements_next(struct seq_file *m, void *v,
size_t event_size;
void *marker;
(*pos)++;
event_header = log->bios_event_log;
if (v == SEQ_START_TOKEN) {
@ -118,7 +119,6 @@ static void *tpm2_bios_measurements_next(struct seq_file *m, void *v,
if (((v + event_size) >= limit) || (event_size == 0))
return NULL;
(*pos)++;
return v;
}


@ -596,9 +596,7 @@ int tpm_chip_register(struct tpm_chip *chip)
tpm_sysfs_add_device(chip);
rc = tpm_bios_log_setup(chip);
if (rc != 0 && rc != -ENODEV)
return rc;
tpm_bios_log_setup(chip);
tpm_add_ppi(chip);


@ -464,7 +464,7 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf,
size_t *bufsiz);
int tpm_bios_log_setup(struct tpm_chip *chip);
void tpm_bios_log_setup(struct tpm_chip *chip);
void tpm_bios_log_teardown(struct tpm_chip *chip);
int tpm_dev_common_init(void);
void tpm_dev_common_exit(void);


@ -432,8 +432,10 @@ static void __init jz4770_cgu_init(struct device_node *np)
cgu = ingenic_cgu_new(jz4770_cgu_clocks,
ARRAY_SIZE(jz4770_cgu_clocks), np);
if (!cgu)
if (!cgu) {
pr_err("%s: failed to initialise CGU\n", __func__);
return;
}
retval = ingenic_cgu_register_clocks(cgu);
if (retval)


@ -189,7 +189,7 @@ static long ingenic_tcu_round_rate(struct clk_hw *hw, unsigned long req_rate,
u8 prescale;
if (req_rate > rate)
return -EINVAL;
return rate;
prescale = ingenic_tcu_get_prescale(rate, req_rate);


@ -344,6 +344,9 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
void __iomem *base;
np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp");
if (!np)
np = of_find_compatible_node(NULL, NULL,
"fsl,imx6ull-ocotp");
if (!np)
return -ENOENT;
@ -484,6 +487,9 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
goto put_reg;
}
/* Because we have added the OPPs here, we must free them */
free_opp = true;
if (of_machine_is_compatible("fsl,imx6ul") ||
of_machine_is_compatible("fsl,imx6ull")) {
ret = imx6ul_opp_check_speed_grading(cpu_dev);
@ -501,8 +507,6 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
imx6q_opp_check_speed_grading(cpu_dev);
}
/* Because we have added the OPPs here, we must free them */
free_opp = true;
num = dev_pm_opp_get_opp_count(cpu_dev);
if (num < 0) {
ret = num;


@ -1080,6 +1080,12 @@ free_and_return:
static inline void clean_chip_info(void)
{
int i;
/* flush any pending work items */
if (chips)
for (i = 0; i < nr_chips; i++)
cancel_work_sync(&chips[i].throttle);
kfree(chips);
}
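The clean_chip_info() hunk above follows the usual rule for freeing an object that embeds a work_struct: cancel or flush the work first, otherwise a still-queued or still-running work item can dereference freed memory. General shape of the pattern, illustrative only:

    cancel_work_sync(&obj->work);   /* blocks until any running handler finishes */
    kfree(obj);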


@ -1950,7 +1950,13 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_decap);
*/
void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata)
{
__be64 sector_size = cpu_to_be64(512);
/*
* Set sector size to a big value, practically disabling
* sector size segmentation in xts implementation. We cannot
* take full advantage of this HW feature with existing
* crypto API / dm-crypt SW architecture.
*/
__be64 sector_size = cpu_to_be64(BIT(15));
u32 *key_jump_cmd;
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
@ -2003,7 +2009,13 @@ EXPORT_SYMBOL(cnstr_shdsc_xts_skcipher_encap);
*/
void cnstr_shdsc_xts_skcipher_decap(u32 * const desc, struct alginfo *cdata)
{
__be64 sector_size = cpu_to_be64(512);
/*
* Set sector size to a big value, practically disabling
* sector size segmentation in xts implementation. We cannot
* take full advantage of this HW feature with existing
* crypto API / dm-crypt SW architecture.
*/
__be64 sector_size = cpu_to_be64(BIT(15));
u32 *key_jump_cmd;
init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
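On the two caamalg descriptor hunks above: BIT(15) is 1 << 15 = 32768, so the advertised hardware "sector" is now 32 KiB. Requests generated by dm-crypt cover one 512- or 4096-byte sector each, well below that limit, so the engine never re-derives the tweak mid-request and the caller-supplied IV is used for the whole operation, which is presumably the behaviour the existing crypto API / dm-crypt layering expects.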


@ -87,6 +87,8 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
{
unsigned int nents = 0;
*lbytes = 0;
while (nbytes && sg_list) {
nents++;
/* get the number of bytes in the last entry */
@ -95,6 +97,7 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
nbytes : sg_list->length;
sg_list = sg_next(sg_list);
}
dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes);
return nents;
}
@ -290,37 +293,25 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
unsigned int nbytes, int direction, u32 *nents,
u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
{
if (sg_is_last(sg)) {
/* One entry only case -set to DLLI */
if (dma_map_sg(dev, sg, 1, direction) != 1) {
dev_err(dev, "dma_map_sg() single buffer failed\n");
return -ENOMEM;
}
dev_dbg(dev, "Mapped sg: dma_address=%pad page=%p addr=%pK offset=%u length=%u\n",
&sg_dma_address(sg), sg_page(sg), sg_virt(sg),
sg->offset, sg->length);
*lbytes = nbytes;
*nents = 1;
*mapped_nents = 1;
} else { /*sg_is_last*/
*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
if (*nents > max_sg_nents) {
*nents = 0;
dev_err(dev, "Too many fragments. current %d max %d\n",
*nents, max_sg_nents);
return -ENOMEM;
}
/* In case of mmu the number of mapped nents might
* be changed from the original sgl nents
*/
*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
if (*mapped_nents == 0) {
*nents = 0;
dev_err(dev, "dma_map_sg() sg buffer failed\n");
return -ENOMEM;
}
int ret = 0;
*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
if (*nents > max_sg_nents) {
*nents = 0;
dev_err(dev, "Too many fragments. current %d max %d\n",
*nents, max_sg_nents);
return -ENOMEM;
}
ret = dma_map_sg(dev, sg, *nents, direction);
if (dma_mapping_error(dev, ret)) {
*nents = 0;
dev_err(dev, "dma_map_sg() sg buffer failed %d\n", ret);
return -ENOMEM;
}
*mapped_nents = ret;
return 0;
}
@ -555,11 +546,12 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
areq_ctx->assoclen, req->cryptlen);
dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_BIDIRECTIONAL);
dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
DMA_BIDIRECTIONAL);
if (req->src != req->dst) {
dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
sg_virt(req->dst));
dma_unmap_sg(dev, req->dst, sg_nents(req->dst),
dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
DMA_BIDIRECTIONAL);
}
if (drvdata->coherent &&
@ -881,7 +873,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
&src_last_bytes);
sg_index = areq_ctx->src_sgl->length;
//check where the data starts
while (sg_index <= size_to_skip) {
while (src_mapped_nents && (sg_index <= size_to_skip)) {
src_mapped_nents--;
offset -= areq_ctx->src_sgl->length;
sgl = sg_next(areq_ctx->src_sgl);
@ -902,13 +894,17 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
if (req->src != req->dst) {
size_for_map = areq_ctx->assoclen + req->cryptlen;
size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
authsize : 0;
if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT)
size_for_map += authsize;
else
size_for_map -= authsize;
if (is_gcm4543)
size_for_map += crypto_aead_ivsize(tfm);
rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
&areq_ctx->dst.nents,
&areq_ctx->dst.mapped_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
&dst_mapped_nents);
if (rc)
@ -921,7 +917,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
offset = size_to_skip;
//check where the data starts
while (sg_index <= size_to_skip) {
while (dst_mapped_nents && sg_index <= size_to_skip) {
dst_mapped_nents--;
offset -= areq_ctx->dst_sgl->length;
sgl = sg_next(areq_ctx->dst_sgl);
@ -1117,13 +1113,15 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
}
size_to_map = req->cryptlen + areq_ctx->assoclen;
if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
/* If we do in-place encryption, we also need the auth tag */
if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) &&
(req->src == req->dst)) {
size_to_map += authsize;
}
if (is_gcm4543)
size_to_map += crypto_aead_ivsize(tfm);
rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
&areq_ctx->src.nents,
&areq_ctx->src.mapped_nents,
(LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
LLI_MAX_NUM_OF_DATA_ENTRIES),
&dummy, &mapped_nents);


@ -25,6 +25,7 @@ enum cc_sg_cpy_direct {
struct cc_mlli {
cc_sram_addr_t sram_addr;
unsigned int mapped_nents;
unsigned int nents; //sg nents
unsigned int mlli_nents; //mlli nents might be different than the above
};


@ -491,11 +491,6 @@ static int _sdei_event_unregister(struct sdei_event *event)
{
lockdep_assert_held(&sdei_events_lock);
spin_lock(&sdei_list_lock);
event->reregister = false;
event->reenable = false;
spin_unlock(&sdei_list_lock);
if (event->type == SDEI_EVENT_TYPE_SHARED)
return sdei_api_event_unregister(event->event_num);
@ -518,6 +513,11 @@ int sdei_event_unregister(u32 event_num)
break;
}
spin_lock(&sdei_list_lock);
event->reregister = false;
event->reenable = false;
spin_unlock(&sdei_list_lock);
err = _sdei_event_unregister(event);
if (err)
break;
@ -585,26 +585,15 @@ static int _sdei_event_register(struct sdei_event *event)
lockdep_assert_held(&sdei_events_lock);
spin_lock(&sdei_list_lock);
event->reregister = true;
spin_unlock(&sdei_list_lock);
if (event->type == SDEI_EVENT_TYPE_SHARED)
return sdei_api_event_register(event->event_num,
sdei_entry_point,
event->registered,
SDEI_EVENT_REGISTER_RM_ANY, 0);
err = sdei_do_cross_call(_local_event_register, event);
if (err) {
spin_lock(&sdei_list_lock);
event->reregister = false;
event->reenable = false;
spin_unlock(&sdei_list_lock);
if (err)
sdei_do_cross_call(_local_event_unregister, event);
}
return err;
}
@ -632,8 +621,17 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
break;
}
spin_lock(&sdei_list_lock);
event->reregister = true;
spin_unlock(&sdei_list_lock);
err = _sdei_event_register(event);
if (err) {
spin_lock(&sdei_list_lock);
event->reregister = false;
event->reenable = false;
spin_unlock(&sdei_list_lock);
sdei_event_destroy(event);
pr_warn("Failed to register event %u: %d\n", event_num,
err);


@ -562,7 +562,7 @@ int __init efi_config_parse_tables(void *config_tables, int count, int sz,
}
}
if (efi_enabled(EFI_MEMMAP))
if (!IS_ENABLED(CONFIG_X86_32) && efi_enabled(EFI_MEMMAP))
efi_memattr_init();
efi_tpm_eventlog_init();


@ -2176,8 +2176,6 @@ static int amdgpu_device_ip_suspend_phase1(struct amdgpu_device *adev)
{
int i, r;
amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
if (!adev->ip_blocks[i].status.valid)
@ -3070,6 +3068,9 @@ int amdgpu_device_suspend(struct drm_device *dev, bool suspend, bool fbcon)
}
}
amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
amdgpu_amdkfd_suspend(adev);
amdgpu_ras_suspend(adev);


@ -1026,6 +1026,8 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
adev->gfx.mec_fw_write_wait = true;
break;
default:
adev->gfx.me_fw_write_wait = true;
adev->gfx.mec_fw_write_wait = true;
break;
}
}


@ -184,6 +184,7 @@ static int renoir_print_clk_levels(struct smu_context *smu,
uint32_t cur_value = 0, value = 0, count = 0, min = 0, max = 0;
DpmClocks_t *clk_table = smu->smu_table.clocks_table;
SmuMetrics_t metrics;
bool cur_value_match_level = false;
if (!clk_table || clk_type >= SMU_CLK_COUNT)
return -EINVAL;
@ -243,8 +244,13 @@ static int renoir_print_clk_levels(struct smu_context *smu,
GET_DPM_CUR_FREQ(clk_table, clk_type, i, value);
size += sprintf(buf + size, "%d: %uMhz %s\n", i, value,
cur_value == value ? "*" : "");
if (cur_value == value)
cur_value_match_level = true;
}
if (!cur_value_match_level)
size += sprintf(buf + size, " %uMhz *\n", cur_value);
return size;
}


@ -37,7 +37,7 @@ extern void renoir_set_ppt_funcs(struct smu_context *smu);
freq = table->SocClocks[dpm_level].Freq; \
break; \
case SMU_MCLK: \
freq = table->MemClocks[dpm_level].Freq; \
freq = table->FClocks[dpm_level].Freq; \
break; \
case SMU_DCEFCLK: \
freq = table->DcfClocks[dpm_level].Freq; \


@ -2694,9 +2694,9 @@ static bool drm_dp_get_vc_payload_bw(int dp_link_bw,
int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state)
{
int ret = 0;
int i = 0;
struct drm_dp_mst_branch *mstb = NULL;
mutex_lock(&mgr->payload_lock);
mutex_lock(&mgr->lock);
if (mst_state == mgr->mst_state)
goto out_unlock;
@ -2755,25 +2755,18 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
/* this can fail if the device is gone */
drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0);
ret = 0;
mutex_lock(&mgr->payload_lock);
memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload));
memset(mgr->payloads, 0,
mgr->max_payloads * sizeof(mgr->payloads[0]));
memset(mgr->proposed_vcpis, 0,
mgr->max_payloads * sizeof(mgr->proposed_vcpis[0]));
mgr->payload_mask = 0;
set_bit(0, &mgr->payload_mask);
for (i = 0; i < mgr->max_payloads; i++) {
struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i];
if (vcpi) {
vcpi->vcpi = 0;
vcpi->num_slots = 0;
}
mgr->proposed_vcpis[i] = NULL;
}
mgr->vcpi_mask = 0;
mutex_unlock(&mgr->payload_lock);
}
out_unlock:
mutex_unlock(&mgr->lock);
mutex_unlock(&mgr->payload_lock);
if (mstb)
drm_dp_mst_topology_put_mstb(mstb);
return ret;


@ -51,8 +51,6 @@
drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t align)
{
drm_dma_handle_t *dmah;
unsigned long addr;
size_t sz;
/* pci_alloc_consistent only guarantees alignment to the smallest
* PAGE_SIZE order which is greater than or equal to the requested size.
@ -68,20 +66,13 @@ drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t ali
dmah->size = size;
dmah->vaddr = dma_alloc_coherent(&dev->pdev->dev, size,
&dmah->busaddr,
GFP_KERNEL | __GFP_COMP);
GFP_KERNEL);
if (dmah->vaddr == NULL) {
kfree(dmah);
return NULL;
}
/* XXX - Is virt_to_page() legal for consistent mem? */
/* Reserve */
for (addr = (unsigned long)dmah->vaddr, sz = size;
sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) {
SetPageReserved(virt_to_page((void *)addr));
}
return dmah;
}
@ -94,19 +85,9 @@ EXPORT_SYMBOL(drm_pci_alloc);
*/
void __drm_legacy_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
{
unsigned long addr;
size_t sz;
if (dmah->vaddr) {
/* XXX - Is virt_to_page() legal for consistent mem? */
/* Unreserve */
for (addr = (unsigned long)dmah->vaddr, sz = dmah->size;
sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) {
ClearPageReserved(virt_to_page((void *)addr));
}
if (dmah->vaddr)
dma_free_coherent(&dev->pdev->dev, dmah->size, dmah->vaddr,
dmah->busaddr);
}
}
/**


@ -32,6 +32,7 @@ struct etnaviv_pm_domain {
};
struct etnaviv_pm_domain_meta {
unsigned int feature;
const struct etnaviv_pm_domain *domains;
u32 nr_domains;
};
@ -410,36 +411,78 @@ static const struct etnaviv_pm_domain doms_vg[] = {
static const struct etnaviv_pm_domain_meta doms_meta[] = {
{
.feature = chipFeatures_PIPE_3D,
.nr_domains = ARRAY_SIZE(doms_3d),
.domains = &doms_3d[0]
},
{
.feature = chipFeatures_PIPE_2D,
.nr_domains = ARRAY_SIZE(doms_2d),
.domains = &doms_2d[0]
},
{
.feature = chipFeatures_PIPE_VG,
.nr_domains = ARRAY_SIZE(doms_vg),
.domains = &doms_vg[0]
}
};
static unsigned int num_pm_domains(const struct etnaviv_gpu *gpu)
{
unsigned int num = 0, i;
for (i = 0; i < ARRAY_SIZE(doms_meta); i++) {
const struct etnaviv_pm_domain_meta *meta = &doms_meta[i];
if (gpu->identity.features & meta->feature)
num += meta->nr_domains;
}
return num;
}
static const struct etnaviv_pm_domain *pm_domain(const struct etnaviv_gpu *gpu,
unsigned int index)
{
const struct etnaviv_pm_domain *domain = NULL;
unsigned int offset = 0, i;
for (i = 0; i < ARRAY_SIZE(doms_meta); i++) {
const struct etnaviv_pm_domain_meta *meta = &doms_meta[i];
if (!(gpu->identity.features & meta->feature))
continue;
if (meta->nr_domains < (index - offset)) {
offset += meta->nr_domains;
continue;
}
domain = meta->domains + (index - offset);
}
return domain;
}
int etnaviv_pm_query_dom(struct etnaviv_gpu *gpu,
struct drm_etnaviv_pm_domain *domain)
{
const struct etnaviv_pm_domain_meta *meta = &doms_meta[domain->pipe];
const unsigned int nr_domains = num_pm_domains(gpu);
const struct etnaviv_pm_domain *dom;
if (domain->iter >= meta->nr_domains)
if (domain->iter >= nr_domains)
return -EINVAL;
dom = meta->domains + domain->iter;
dom = pm_domain(gpu, domain->iter);
if (!dom)
return -EINVAL;
domain->id = domain->iter;
domain->nr_signals = dom->nr_signals;
strncpy(domain->name, dom->name, sizeof(domain->name));
domain->iter++;
if (domain->iter == meta->nr_domains)
if (domain->iter == nr_domains)
domain->iter = 0xff;
return 0;
@ -448,14 +491,16 @@ int etnaviv_pm_query_dom(struct etnaviv_gpu *gpu,
int etnaviv_pm_query_sig(struct etnaviv_gpu *gpu,
struct drm_etnaviv_pm_signal *signal)
{
const struct etnaviv_pm_domain_meta *meta = &doms_meta[signal->pipe];
const unsigned int nr_domains = num_pm_domains(gpu);
const struct etnaviv_pm_domain *dom;
const struct etnaviv_pm_signal *sig;
if (signal->domain >= meta->nr_domains)
if (signal->domain >= nr_domains)
return -EINVAL;
dom = meta->domains + signal->domain;
dom = pm_domain(gpu, signal->domain);
if (!dom)
return -EINVAL;
if (signal->iter >= dom->nr_signals)
return -EINVAL;


@ -2124,7 +2124,11 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder,
return;
dig_port = enc_to_dig_port(&encoder->base);
intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain);
if (!intel_phy_is_tc(dev_priv, phy) ||
dig_port->tc_mode != TC_PORT_TBT_ALT)
intel_display_power_get(dev_priv,
dig_port->ddi_io_power_domain);
/*
* AUX power is only needed for (e)DP mode, and for HDMI mode on TC


@ -931,11 +931,13 @@ static inline struct i915_ggtt *cache_to_ggtt(struct reloc_cache *cache)
static void reloc_gpu_flush(struct reloc_cache *cache)
{
GEM_BUG_ON(cache->rq_size >= cache->rq->batch->obj->base.size / sizeof(u32));
struct drm_i915_gem_object *obj = cache->rq->batch->obj;
GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32));
cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
__i915_gem_object_flush_map(cache->rq->batch->obj, 0, cache->rq_size);
i915_gem_object_unpin_map(cache->rq->batch->obj);
__i915_gem_object_flush_map(obj, 0, sizeof(u32) * (cache->rq_size + 1));
i915_gem_object_unpin_map(obj);
intel_gt_chipset_flush(cache->rq->engine->gt);

Some files were not shown because too many files have changed in this diff.