
Merge tag 's390-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Vasily Gorbik:

 - Add support for IBM z15 machines.

 - Add SHA3 support, CCA AES cipher key support in zcrypt, and pkey
   refactoring.

 - Move to arch_stack_walk infrastructure for the stack unwinder.

 - Various kasan fixes and improvements.

 - Various command line parsing fixes.

 - Improve decompressor phase debuggability.

 - Lift the no-bss usage restriction for the early code.

 - Use refcount_t for reference counters in a couple of places in mm
   code.

 - Logging improvements and return code fix in vfio-ccw code.

 - A couple of zpci fixes and minor refactoring.

 - Remove some outdated documentation.

 - Fix secure boot detection.

 - Various other minor code cleanups.

* tag 's390-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (48 commits)
  s390: remove pointless drivers-y in drivers/s390/Makefile
  s390/cpum_sf: Fix line length and format string
  s390/pci: fix MSI message data
  s390: add support for IBM z15 machines
  s390/crypto: Support for SHA3 via CPACF (MSA6)
  s390/startup: add pgm check info printing
  s390/crypto: xts-aes-s390 fix extra run-time crypto self tests finding
  vfio-ccw: fix error return code in vfio_ccw_sch_init()
  s390: vfio-ap: fix warning reset not completed
  s390/base: remove unused s390_base_mcck_handler
  s390/sclp: Fix bit checked for has_sipl
  s390/zcrypt: fix wrong handling of cca cipher keygenflags
  s390/kasan: add kdump support
  s390/setup: avoid using strncmp with hardcoded length
  s390/sclp: avoid using strncmp with hardcoded length
  s390/module: avoid using strncmp with hardcoded length
  s390/pci: avoid using strncmp with hardcoded length
  s390/kaslr: reserve memory for kasan usage
  s390/mem_detect: provide single get_mem_detect_end
  s390/cmma: reuse kstrtobool for option value parsing
  ...
Commit d590284419 by Linus Torvalds, 2019-09-17 14:04:43 -07:00
73 changed files with 4053 additions and 4129 deletions


@ -1,84 +0,0 @@
==================
DASD device driver
==================
S/390's disk devices (DASDs) are managed by Linux via the DASD device
driver. It is valid for all types of DASDs and represents them to
Linux as block devices, namely "dd". Currently the DASD driver uses a
single major number (254) and 4 minor numbers per volume (1 for the
physical volume and 3 for partitions). With respect to partitions see
below. Thus you may have up to 64 DASD devices in your system.
The kernel parameter 'dasd=from-to,...' may be issued arbitrary times
in the kernel's parameter line or not at all. The 'from' and 'to'
parameters are to be given in hexadecimal notation without a leading
0x.
If you supply kernel parameters the different instances are processed
in order of appearance and a minor number is reserved for any device
covered by the supplied range up to 64 volumes. Additional DASDs are
ignored. If you do not supply the 'dasd=' kernel parameter at all, the
DASD driver registers all supported DASDs of your system to a minor
number in ascending order of the subchannel number.
The driver currently supports ECKD-devices and there are stubs for
support of the FBA and CKD architectures. For the FBA architecture
only some smart data structures are missing to make the support
complete.
We performed our testing on 3380 and 3390 type disks of different
sizes, under VM and on the bare hardware (LPAR), using internal disks
of the multiprise as well as a RAMAC virtual array. Disks exported by
an Enterprise Storage Server (Seascape) should work fine as well.
We currently implement one partition per volume, which is the whole
volume, skipping the first blocks up to the volume label. These are
reserved for IPL records and IBM's volume label to assure
accessibility of the DASD from other OSs. In a later stage we will
provide support of partitions, maybe VTOC oriented or using a kind of
partition table in the label record.
Usage
=====
-Low-level format (?CKD only)
For using an ECKD-DASD as a Linux harddisk you have to low-level
format the tracks by issuing the BLKDASDFORMAT-ioctl on that
device. This will erase any data on that volume including IBM volume
labels, VTOCs etc. The ioctl may take a `struct format_data *` or
'NULL' as an argument::
typedef struct {
        int start_unit;
        int stop_unit;
        int blksize;
} format_data_t;
When a NULL argument is passed to the BLKDASDFORMAT ioctl the whole
disk is formatted to a blocksize of 1024 bytes. Otherwise start_unit
and stop_unit are the first and last track to be formatted. If
stop_unit is -1 it implies that the DASD is formatted from start_unit
up to the last track. blksize can be any power of two between 512 and
4096. We recommend no blksize lower than 1024 because the ext2fs uses
1kB blocks anyway and you gain approx. 50% of capacity increasing your
blksize from 512 byte to 1kB.
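A minimal userspace sketch of driving this ioctl, assuming the `format_data_t` layout above; the `format_args_valid` helper and the `dasd_format` wrapper are illustrations only, and the `BLKDASDFORMAT` request macro must come from the DASD driver headers:

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Mirrors the driver's format_data_t shown above. */
typedef struct {
	int start_unit;	/* first track to format */
	int stop_unit;	/* last track; -1 means "up to the last track" */
	int blksize;	/* power of two between 512 and 4096 */
} format_data_t;

/* Check the constraints the text describes before issuing the ioctl. */
static int format_args_valid(const format_data_t *fd)
{
	if (fd->blksize < 512 || fd->blksize > 4096)
		return 0;
	if (fd->blksize & (fd->blksize - 1))	/* not a power of two */
		return 0;
	if (fd->stop_unit != -1 && fd->stop_unit < fd->start_unit)
		return 0;
	return 1;
}

/* Hypothetical wrapper; BLKDASDFORMAT is only defined by the DASD headers. */
#ifdef BLKDASDFORMAT
static int dasd_format(const char *dev, const format_data_t *fd)
{
	int rc, fh = open(dev, O_RDWR);

	if (fh < 0)
		return -1;
	/* passing NULL instead of fd formats the whole disk with 1kB blocks */
	rc = ioctl(fh, BLKDASDFORMAT, fd);
	close(fh);
	return rc;
}
#endif
```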
Make a filesystem
=================
Then you can mk??fs the filesystem of your choice on that volume or
partition. For reasons of sanity you should build your filesystem on
the partition /dev/dd?1 instead of the whole volume. You only lose 3kB
but may be sure that you can reuse your data after introduction of a
real partition table.
Bugs
====
- Performance sometimes is rather low because we don't fully exploit clustering
TODO-List
=========
- Add IBM'S Disk layout to genhd
- Enhance driver to use more than one major number
- Enable usage as a module
- Support Cache fast write and DASD fast write (ECKD)

File diff suppressed because it is too large.


@ -7,7 +7,6 @@ s390 Architecture
cds
3270
debugging390
driver-model
monreader
qeth
@ -15,7 +14,6 @@ s390 Architecture
vfio-ap
vfio-ccw
zfcpdump
dasd
common_io
text_files


@ -105,6 +105,7 @@ config S390
select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
select ARCH_KEEP_MEMBLOCK
select ARCH_SAVE_PAGE_KEYS if HIBERNATION
select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_USE_BUILTIN_BSWAP
@ -236,6 +237,10 @@ config HAVE_MARCH_Z14_FEATURES
def_bool n
select HAVE_MARCH_Z13_FEATURES
config HAVE_MARCH_Z15_FEATURES
def_bool n
select HAVE_MARCH_Z14_FEATURES
choice
prompt "Processor type"
default MARCH_Z196
@ -307,6 +312,14 @@ config MARCH_Z14
and 3906 series). The kernel will be slightly faster but will not
work on older machines.
config MARCH_Z15
bool "IBM z15"
select HAVE_MARCH_Z15_FEATURES
help
Select this to enable optimizations for IBM z15 (8562
and 8561 series). The kernel will be slightly faster but will not
work on older machines.
endchoice
config MARCH_Z900_TUNE
@ -333,6 +346,9 @@ config MARCH_Z13_TUNE
config MARCH_Z14_TUNE
def_bool TUNE_Z14 || MARCH_Z14 && TUNE_DEFAULT
config MARCH_Z15_TUNE
def_bool TUNE_Z15 || MARCH_Z15 && TUNE_DEFAULT
choice
prompt "Tune code generation"
default TUNE_DEFAULT
@ -377,6 +393,9 @@ config TUNE_Z13
config TUNE_Z14
bool "IBM z14"
config TUNE_Z15
bool "IBM z15"
endchoice
config 64BIT


@ -45,6 +45,7 @@ mflags-$(CONFIG_MARCH_Z196) := -march=z196
mflags-$(CONFIG_MARCH_ZEC12) := -march=zEC12
mflags-$(CONFIG_MARCH_Z13) := -march=z13
mflags-$(CONFIG_MARCH_Z14) := -march=z14
mflags-$(CONFIG_MARCH_Z15) := -march=z15
export CC_FLAGS_MARCH := $(mflags-y)
@ -59,6 +60,7 @@ cflags-$(CONFIG_MARCH_Z196_TUNE) += -mtune=z196
cflags-$(CONFIG_MARCH_ZEC12_TUNE) += -mtune=zEC12
cflags-$(CONFIG_MARCH_Z13_TUNE) += -mtune=z13
cflags-$(CONFIG_MARCH_Z14_TUNE) += -mtune=z14
cflags-$(CONFIG_MARCH_Z15_TUNE) += -mtune=z15
cflags-y += -Wa,-I$(srctree)/arch/$(ARCH)/include


@ -36,7 +36,7 @@ CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char
obj-y := head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o
obj-y += string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o
obj-y += version.o ctype.o text_dma.o
obj-y += version.o pgm_check_info.o ctype.o text_dma.o
obj-$(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) += uv.o
obj-$(CONFIG_RELOCATABLE) += machine_kexec_reloc.o
obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o


@ -10,6 +10,7 @@ void parse_boot_command_line(void);
void setup_memory_end(void);
void verify_facilities(void);
void print_missing_facilities(void);
void print_pgm_check_info(void);
unsigned long get_random_base(unsigned long safe_addr);
extern int kaslr_enabled;


@ -1,5 +1,2 @@
sizes.h
vmlinux
vmlinux.lds
vmlinux.scr.lds
vmlinux.bin.full


@ -37,9 +37,9 @@ SECTIONS
* .dma section for code, data, ex_table that need to stay below 2 GB,
* even when the kernel is relocate: above 2 GB.
*/
. = ALIGN(PAGE_SIZE);
_sdma = .;
.dma.text : {
. = ALIGN(PAGE_SIZE);
_stext_dma = .;
*(.dma.text)
. = ALIGN(PAGE_SIZE);
@ -52,6 +52,7 @@ SECTIONS
_stop_dma_ex_table = .;
}
.dma.data : { *(.dma.data) }
. = ALIGN(PAGE_SIZE);
_edma = .;
BOOT_DATA


@ -60,8 +60,10 @@ __HEAD
.long 0x02000690,0x60000050
.long 0x020006e0,0x20000050
.org 0x1a0
.org __LC_RST_NEW_PSW # 0x1a0
.quad 0,iplstart
.org __LC_PGM_NEW_PSW # 0x1d0
.quad 0x0000000180000000,startup_pgm_check_handler
.org 0x200
@ -351,6 +353,34 @@ ENTRY(startup_kdump)
#include "head_kdump.S"
#
# This program check is active immediately after kernel start
# and until early_pgm_check_handler is set in kernel/early.c
# It simply saves general/control registers and psw in
# the save area and does disabled wait with a faulty address.
#
ENTRY(startup_pgm_check_handler)
stmg %r0,%r15,__LC_SAVE_AREA_SYNC
la %r1,4095
stctg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r1)
mvc __LC_GPREGS_SAVE_AREA-4095(128,%r1),__LC_SAVE_AREA_SYNC
mvc __LC_PSW_SAVE_AREA-4095(16,%r1),__LC_PGM_OLD_PSW
mvc __LC_RETURN_PSW(16),__LC_PGM_OLD_PSW
ni __LC_RETURN_PSW,0xfc # remove IO and EX bits
ni __LC_RETURN_PSW+1,0xfb # remove MCHK bit
oi __LC_RETURN_PSW+1,0x2 # set wait state bit
larl %r2,.Lold_psw_disabled_wait
stg %r2,__LC_PGM_NEW_PSW+8
l %r15,.Ldump_info_stack-.Lold_psw_disabled_wait(%r2)
brasl %r14,print_pgm_check_info
.Lold_psw_disabled_wait:
la %r1,4095
lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)
lpswe __LC_RETURN_PSW # disabled wait
.Ldump_info_stack:
.long 0x5000 + PAGE_SIZE - STACK_FRAME_OVERHEAD
ENDPROC(startup_pgm_check_handler)
#
# params at 10400 (setup.h)
# Must be keept in sync with struct parmarea in setup.h


@ -7,6 +7,7 @@
#include <asm/sections.h>
#include <asm/boot_data.h>
#include <asm/facility.h>
#include <asm/pgtable.h>
#include <asm/uv.h>
#include "boot.h"
@ -14,6 +15,7 @@ char __bootdata(early_command_line)[COMMAND_LINE_SIZE];
struct ipl_parameter_block __bootdata_preserved(ipl_block);
int __bootdata_preserved(ipl_block_valid);
unsigned long __bootdata(vmalloc_size) = VMALLOC_DEFAULT_SIZE;
unsigned long __bootdata(memory_end);
int __bootdata(memory_end_set);
int __bootdata(noexec_disabled);
@ -219,18 +221,21 @@ void parse_boot_command_line(void)
while (*args) {
args = next_arg(args, &param, &val);
if (!strcmp(param, "mem")) {
memory_end = memparse(val, NULL);
if (!strcmp(param, "mem") && val) {
memory_end = round_down(memparse(val, NULL), PAGE_SIZE);
memory_end_set = 1;
}
if (!strcmp(param, "vmalloc") && val)
vmalloc_size = round_up(memparse(val, NULL), PAGE_SIZE);
if (!strcmp(param, "noexec")) {
rc = kstrtobool(val, &enabled);
if (!rc && !enabled)
noexec_disabled = 1;
}
if (!strcmp(param, "facilities"))
if (!strcmp(param, "facilities") && val)
modify_fac_list(val);
if (!strcmp(param, "nokaslr"))


@ -3,6 +3,7 @@
* Copyright IBM Corp. 2019
*/
#include <asm/mem_detect.h>
#include <asm/pgtable.h>
#include <asm/cpacf.h>
#include <asm/timex.h>
#include <asm/sclp.h>
@ -90,8 +91,10 @@ static unsigned long get_random(unsigned long limit)
unsigned long get_random_base(unsigned long safe_addr)
{
unsigned long memory_limit = memory_end_set ? memory_end : 0;
unsigned long base, start, end, kernel_size;
unsigned long block_sum, offset;
unsigned long kasan_needs;
int i;
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE) {
@ -100,14 +103,36 @@ unsigned long get_random_base(unsigned long safe_addr)
}
safe_addr = ALIGN(safe_addr, THREAD_SIZE);
if ((IS_ENABLED(CONFIG_KASAN))) {
/*
* Estimate kasan memory requirements, which it will reserve
* at the very end of available physical memory. To estimate
* that, we take into account that kasan would require
* 1/8 of available physical memory (for shadow memory) +
* creating page tables for the whole memory + shadow memory
* region (1 + 1/8). To keep page tables estimates simple take
* the double of combined ptes size.
*/
memory_limit = get_mem_detect_end();
if (memory_end_set && memory_limit > memory_end)
memory_limit = memory_end;
/* for shadow memory */
kasan_needs = memory_limit / 8;
/* for paging structures */
kasan_needs += (memory_limit + kasan_needs) / PAGE_SIZE /
_PAGE_ENTRIES * _PAGE_TABLE_SIZE * 2;
memory_limit -= kasan_needs;
}
kernel_size = vmlinux.image_size + vmlinux.bss_size;
block_sum = 0;
for_each_mem_detect_block(i, &start, &end) {
if (memory_end_set) {
if (start >= memory_end)
if (memory_limit) {
if (start >= memory_limit)
break;
if (end > memory_end)
end = memory_end;
if (end > memory_limit)
end = memory_limit;
}
if (end - start < kernel_size)
continue;
@ -125,11 +150,11 @@ unsigned long get_random_base(unsigned long safe_addr)
base = safe_addr;
block_sum = offset = 0;
for_each_mem_detect_block(i, &start, &end) {
if (memory_end_set) {
if (start >= memory_end)
if (memory_limit) {
if (start >= memory_limit)
break;
if (end > memory_end)
end = memory_end;
if (end > memory_limit)
end = memory_limit;
}
if (end - start < kernel_size)
continue;
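The kasan reservation estimate described in the comment above can be checked with a small stand-alone arithmetic sketch; the constants mirror s390 (4 KB pages, 256 PTEs per 2 KB page table) but are assumptions of this illustration:

```c
/* Stand-alone sketch of the kasan memory-requirement estimate above:
 * 1/8 of physical memory for shadow memory, plus double the combined
 * pte size for page tables covering memory + shadow (1 + 1/8).
 * The constants are assumptions mirroring s390.
 */
#define SKETCH_PAGE_SIZE	4096UL
#define SKETCH_PAGE_ENTRIES	256UL	/* PTEs per page table */
#define SKETCH_PAGE_TABLE_SIZE	2048UL	/* bytes per page table */

static unsigned long kasan_estimate(unsigned long memory_limit)
{
	unsigned long kasan_needs;

	/* for shadow memory */
	kasan_needs = memory_limit / 8;
	/* for paging structures */
	kasan_needs += (memory_limit + kasan_needs) / SKETCH_PAGE_SIZE /
		       SKETCH_PAGE_ENTRIES * SKETCH_PAGE_TABLE_SIZE * 2;
	return kasan_needs;
}
```

For 1 GB of memory this reserves 128 MB of shadow plus a few MB of page-table overhead.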


@ -63,13 +63,6 @@ void add_mem_detect_block(u64 start, u64 end)
mem_detect.count++;
}
static unsigned long get_mem_detect_end(void)
{
if (mem_detect.count)
return __get_mem_detect_block_ptr(mem_detect.count - 1)->end;
return 0;
}
static int __diag260(unsigned long rx1, unsigned long rx2)
{
register unsigned long _rx1 asm("2") = rx1;


@ -0,0 +1,90 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/lowcore.h>
#include <asm/sclp.h>
#include "boot.h"
const char hex_asc[] = "0123456789abcdef";
#define add_val_as_hex(dst, val) \
__add_val_as_hex(dst, (const unsigned char *)&val, sizeof(val))
static char *__add_val_as_hex(char *dst, const unsigned char *src, size_t count)
{
while (count--)
dst = hex_byte_pack(dst, *src++);
return dst;
}
static char *add_str(char *dst, char *src)
{
strcpy(dst, src);
return dst + strlen(dst);
}
void print_pgm_check_info(void)
{
struct psw_bits *psw = &psw_bits(S390_lowcore.psw_save_area);
unsigned short ilc = S390_lowcore.pgm_ilc >> 1;
char buf[256];
int row, col;
char *p;
add_str(buf, "Linux version ");
strlcat(buf, kernel_version, sizeof(buf));
sclp_early_printk(buf);
p = add_str(buf, "Kernel fault: interruption code ");
p = add_val_as_hex(buf + strlen(buf), S390_lowcore.pgm_code);
p = add_str(p, " ilc:");
*p++ = hex_asc_lo(ilc);
add_str(p, "\n");
sclp_early_printk(buf);
p = add_str(buf, "PSW : ");
p = add_val_as_hex(p, S390_lowcore.psw_save_area.mask);
p = add_str(p, " ");
p = add_val_as_hex(p, S390_lowcore.psw_save_area.addr);
add_str(p, "\n");
sclp_early_printk(buf);
p = add_str(buf, " R:");
*p++ = hex_asc_lo(psw->per);
p = add_str(p, " T:");
*p++ = hex_asc_lo(psw->dat);
p = add_str(p, " IO:");
*p++ = hex_asc_lo(psw->io);
p = add_str(p, " EX:");
*p++ = hex_asc_lo(psw->ext);
p = add_str(p, " Key:");
*p++ = hex_asc_lo(psw->key);
p = add_str(p, " M:");
*p++ = hex_asc_lo(psw->mcheck);
p = add_str(p, " W:");
*p++ = hex_asc_lo(psw->wait);
p = add_str(p, " P:");
*p++ = hex_asc_lo(psw->pstate);
p = add_str(p, " AS:");
*p++ = hex_asc_lo(psw->as);
p = add_str(p, " CC:");
*p++ = hex_asc_lo(psw->cc);
p = add_str(p, " PM:");
*p++ = hex_asc_lo(psw->pm);
p = add_str(p, " RI:");
*p++ = hex_asc_lo(psw->ri);
p = add_str(p, " EA:");
*p++ = hex_asc_lo(psw->eaba);
add_str(p, "\n");
sclp_early_printk(buf);
for (row = 0; row < 4; row++) {
p = add_str(buf, row == 0 ? "GPRS:" : " ");
for (col = 0; col < 4; col++) {
p = add_str(p, " ");
p = add_val_as_hex(p, S390_lowcore.gpregs_save_area[row * 4 + col]);
}
add_str(p, "\n");
sclp_early_printk(buf);
}
}
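The hex-printing helpers above can be sketched in isolation: `hex_byte_pack` emits the high nibble then the low nibble, so `__add_val_as_hex` prints bytes in memory order (on big-endian s390 that is most-significant byte first). This is a userspace illustration, not the kernel's `lib/hexdump.c` implementation:

```c
#include <stddef.h>

static const char hex_digits[] = "0123456789abcdef";

/* pack one byte as two lowercase hex characters, high nibble first */
static char *hex_byte_pack(char *buf, unsigned char byte)
{
	*buf++ = hex_digits[byte >> 4];
	*buf++ = hex_digits[byte & 0x0f];
	return buf;
}

/* dump count bytes starting at src, in memory order */
static char *add_val_as_hex(char *dst, const unsigned char *src, size_t count)
{
	while (count--)
		dst = hex_byte_pack(dst, *src++);
	return dst;	/* points just past the last character written */
}
```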


@ -112,6 +112,11 @@ static void handle_relocs(unsigned long offset)
}
}
static void clear_bss_section(void)
{
memset((void *)vmlinux.default_lma + vmlinux.image_size, 0, vmlinux.bss_size);
}
void startup_kernel(void)
{
unsigned long random_lma;
@ -151,6 +156,7 @@ void startup_kernel(void)
} else if (__kaslr_offset)
memcpy((void *)vmlinux.default_lma, img, vmlinux.image_size);
clear_bss_section();
copy_bootdata();
if (IS_ENABLED(CONFIG_RELOCATABLE))
handle_relocs(__kaslr_offset);


@ -717,6 +717,8 @@ CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA256_S390=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_SHA3_256_S390=m
CONFIG_CRYPTO_SHA3_512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m


@ -710,6 +710,8 @@ CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA256_S390=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_SHA3_256_S390=m
CONFIG_CRYPTO_SHA3_512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m


@ -6,6 +6,8 @@
obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA3_256_S390) += sha3_256_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA3_512_S390) += sha3_512_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o
obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o
obj-$(CONFIG_CRYPTO_PAES_S390) += paes_s390.o


@ -586,6 +586,9 @@ static int xts_aes_encrypt(struct blkcipher_desc *desc,
struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
struct blkcipher_walk walk;
if (!nbytes)
return -EINVAL;
if (unlikely(!xts_ctx->fc))
return xts_fallback_encrypt(desc, dst, src, nbytes);
@ -600,6 +603,9 @@ static int xts_aes_decrypt(struct blkcipher_desc *desc,
struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
struct blkcipher_walk walk;
if (!nbytes)
return -EINVAL;
if (unlikely(!xts_ctx->fc))
return xts_fallback_decrypt(desc, dst, src, nbytes);


@ -5,7 +5,7 @@
* s390 implementation of the AES Cipher Algorithm with protected keys.
*
* s390 Version:
* Copyright IBM Corp. 2017
* Copyright IBM Corp. 2017,2019
* Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
* Harald Freudenberger <freude@de.ibm.com>
*/
@ -25,16 +25,59 @@
#include <asm/cpacf.h>
#include <asm/pkey.h>
/*
* Key blobs smaller/bigger than these defines are rejected
* by the common code even before the individual setkey function
* is called. As paes can handle different kinds of key blobs
* and padding is also possible, the limits need to be generous.
*/
#define PAES_MIN_KEYSIZE 64
#define PAES_MAX_KEYSIZE 256
static u8 *ctrblk;
static DEFINE_SPINLOCK(ctrblk_lock);
static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
struct key_blob {
__u8 key[MAXKEYBLOBSIZE];
/*
* Small keys will be stored in the keybuf. Larger keys are
* stored in extra allocated memory. In both cases does
* key point to the memory where the key is stored.
* The code distinguishes by checking keylen against
* sizeof(keybuf). See the two following helper functions.
*/
u8 *key;
u8 keybuf[128];
unsigned int keylen;
};
static inline int _copy_key_to_kb(struct key_blob *kb,
const u8 *key,
unsigned int keylen)
{
if (keylen <= sizeof(kb->keybuf))
kb->key = kb->keybuf;
else {
kb->key = kmalloc(keylen, GFP_KERNEL);
if (!kb->key)
return -ENOMEM;
}
memcpy(kb->key, key, keylen);
kb->keylen = keylen;
return 0;
}
static inline void _free_kb_keybuf(struct key_blob *kb)
{
if (kb->key && kb->key != kb->keybuf
&& kb->keylen > sizeof(kb->keybuf)) {
kfree(kb->key);
kb->key = NULL;
}
}
struct s390_paes_ctx {
struct key_blob kb;
struct pkey_protkey pk;
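The inline-vs-heap key storage pattern introduced by this hunk can be sketched in isolation; userspace `malloc`/`free` stand in for the driver's `kmalloc`/`kfree`:

```c
#include <stdlib.h>
#include <string.h>

/* Userspace sketch of the key_blob pattern above: small keys live in
 * the embedded keybuf, larger keys in separately allocated memory, and
 * kb->key always points at wherever the key actually is. The code
 * distinguishes the cases by checking keylen against sizeof(keybuf).
 */
struct key_blob {
	unsigned char *key;
	unsigned char keybuf[128];
	unsigned int keylen;
};

static int copy_key_to_kb(struct key_blob *kb,
			  const unsigned char *key, unsigned int keylen)
{
	if (keylen <= sizeof(kb->keybuf)) {
		kb->key = kb->keybuf;
	} else {
		kb->key = malloc(keylen);	/* kmalloc in the driver */
		if (!kb->key)
			return -1;		/* -ENOMEM in the driver */
	}
	memcpy(kb->key, key, keylen);
	kb->keylen = keylen;
	return 0;
}

static void free_kb_keybuf(struct key_blob *kb)
{
	/* only heap-backed keys are freed; the embedded buffer is not */
	if (kb->key && kb->key != kb->keybuf &&
	    kb->keylen > sizeof(kb->keybuf)) {
		free(kb->key);
		kb->key = NULL;
	}
}
```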
@ -80,13 +123,33 @@ static int __paes_set_key(struct s390_paes_ctx *ctx)
return ctx->fc ? 0 : -EINVAL;
}
static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
static int ecb_paes_init(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
memcpy(ctx->kb.key, in_key, key_len);
ctx->kb.keylen = key_len;
ctx->kb.key = NULL;
return 0;
}
static void ecb_paes_exit(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb);
}
static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
int rc;
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb);
rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
if (rc)
return rc;
if (__paes_set_key(ctx)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
@ -148,10 +211,12 @@ static struct crypto_alg ecb_paes_alg = {
.cra_type = &crypto_blkcipher_type,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list),
.cra_init = ecb_paes_init,
.cra_exit = ecb_paes_exit,
.cra_u = {
.blkcipher = {
.min_keysize = MINKEYBLOBSIZE,
.max_keysize = MAXKEYBLOBSIZE,
.min_keysize = PAES_MIN_KEYSIZE,
.max_keysize = PAES_MAX_KEYSIZE,
.setkey = ecb_paes_set_key,
.encrypt = ecb_paes_encrypt,
.decrypt = ecb_paes_decrypt,
@ -159,6 +224,22 @@ static struct crypto_alg ecb_paes_alg = {
}
};
static int cbc_paes_init(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->kb.key = NULL;
return 0;
}
static void cbc_paes_exit(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb);
}
static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
{
unsigned long fc;
@ -180,10 +261,14 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
int rc;
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
memcpy(ctx->kb.key, in_key, key_len);
ctx->kb.keylen = key_len;
_free_kb_keybuf(&ctx->kb);
rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
if (rc)
return rc;
if (__cbc_paes_set_key(ctx)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
@ -252,10 +337,12 @@ static struct crypto_alg cbc_paes_alg = {
.cra_type = &crypto_blkcipher_type,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list),
.cra_init = cbc_paes_init,
.cra_exit = cbc_paes_exit,
.cra_u = {
.blkcipher = {
.min_keysize = MINKEYBLOBSIZE,
.max_keysize = MAXKEYBLOBSIZE,
.min_keysize = PAES_MIN_KEYSIZE,
.max_keysize = PAES_MAX_KEYSIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = cbc_paes_set_key,
.encrypt = cbc_paes_encrypt,
@ -264,6 +351,24 @@ static struct crypto_alg cbc_paes_alg = {
}
};
static int xts_paes_init(struct crypto_tfm *tfm)
{
struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->kb[0].key = NULL;
ctx->kb[1].key = NULL;
return 0;
}
static void xts_paes_exit(struct crypto_tfm *tfm)
{
struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb[0]);
_free_kb_keybuf(&ctx->kb[1]);
}
static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
{
unsigned long fc;
@ -287,20 +392,27 @@ static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
}
static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
unsigned int xts_key_len)
{
int rc;
struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
u8 ckey[2 * AES_MAX_KEY_SIZE];
unsigned int ckey_len, keytok_len;
unsigned int ckey_len, key_len;
if (key_len % 2)
if (xts_key_len % 2)
return -EINVAL;
keytok_len = key_len / 2;
memcpy(ctx->kb[0].key, in_key, keytok_len);
ctx->kb[0].keylen = keytok_len;
memcpy(ctx->kb[1].key, in_key + keytok_len, keytok_len);
ctx->kb[1].keylen = keytok_len;
key_len = xts_key_len / 2;
_free_kb_keybuf(&ctx->kb[0]);
_free_kb_keybuf(&ctx->kb[1]);
rc = _copy_key_to_kb(&ctx->kb[0], in_key, key_len);
if (rc)
return rc;
rc = _copy_key_to_kb(&ctx->kb[1], in_key + key_len, key_len);
if (rc)
return rc;
if (__xts_paes_set_key(ctx)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
@ -394,10 +506,12 @@ static struct crypto_alg xts_paes_alg = {
.cra_type = &crypto_blkcipher_type,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(xts_paes_alg.cra_list),
.cra_init = xts_paes_init,
.cra_exit = xts_paes_exit,
.cra_u = {
.blkcipher = {
.min_keysize = 2 * MINKEYBLOBSIZE,
.max_keysize = 2 * MAXKEYBLOBSIZE,
.min_keysize = 2 * PAES_MIN_KEYSIZE,
.max_keysize = 2 * PAES_MAX_KEYSIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = xts_paes_set_key,
.encrypt = xts_paes_encrypt,
@ -406,6 +520,22 @@ static struct crypto_alg xts_paes_alg = {
}
};
static int ctr_paes_init(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->kb.key = NULL;
return 0;
}
static void ctr_paes_exit(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb);
}
static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
{
unsigned long fc;
@ -428,10 +558,14 @@ static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
int rc;
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
memcpy(ctx->kb.key, in_key, key_len);
ctx->kb.keylen = key_len;
_free_kb_keybuf(&ctx->kb);
rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
if (rc)
return rc;
if (__ctr_paes_set_key(ctx)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
@ -541,10 +675,12 @@ static struct crypto_alg ctr_paes_alg = {
.cra_type = &crypto_blkcipher_type,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(ctr_paes_alg.cra_list),
.cra_init = ctr_paes_init,
.cra_exit = ctr_paes_exit,
.cra_u = {
.blkcipher = {
.min_keysize = MINKEYBLOBSIZE,
.max_keysize = MAXKEYBLOBSIZE,
.min_keysize = PAES_MIN_KEYSIZE,
.max_keysize = PAES_MAX_KEYSIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = ctr_paes_set_key,
.encrypt = ctr_paes_encrypt,


@ -12,15 +12,17 @@
#include <linux/crypto.h>
#include <crypto/sha.h>
#include <crypto/sha3.h>
/* must be big enough for the largest SHA variant */
#define SHA_MAX_STATE_SIZE (SHA512_DIGEST_SIZE / 4)
#define SHA_MAX_BLOCK_SIZE SHA512_BLOCK_SIZE
#define SHA3_STATE_SIZE 200
#define CPACF_MAX_PARMBLOCK_SIZE SHA3_STATE_SIZE
#define SHA_MAX_BLOCK_SIZE SHA3_224_BLOCK_SIZE
struct s390_sha_ctx {
u64 count; /* message length in bytes */
u32 state[SHA_MAX_STATE_SIZE];
u8 buf[2 * SHA_MAX_BLOCK_SIZE];
u64 count; /* message length in bytes */
u32 state[CPACF_MAX_PARMBLOCK_SIZE / sizeof(u32)];
u8 buf[SHA_MAX_BLOCK_SIZE];
int func; /* KIMD function to use */
};
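The new `SHA_MAX_BLOCK_SIZE` of `SHA3_224_BLOCK_SIZE` follows from the Keccak sponge construction: the 1600-bit (200-byte) state is split into rate and capacity, with capacity twice the digest size, so the smallest digest (SHA3-224) has the largest block. A quick arithmetic sketch (the helper name is ours):

```c
/* Keccak sponge arithmetic: block ("rate") = 200-byte state minus
 * twice the digest size, so SHA3-224 has the largest block size of
 * the SHA3 family -- hence SHA_MAX_BLOCK_SIZE above.
 */
#define KECCAK_STATE_BYTES 200

static int sha3_block_size(int digest_bytes)
{
	return KECCAK_STATE_BYTES - 2 * digest_bytes;
}
```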


@ -0,0 +1,147 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Cryptographic API.
*
* s390 implementation of the SHA3-256 and SHA3-224 Secure Hash Algorithm.
*
* s390 Version:
* Copyright IBM Corp. 2019
* Author(s): Joerg Schmidbauer (jschmidb@de.ibm.com)
*/
#include <crypto/internal/hash.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/sha.h>
#include <crypto/sha3.h>
#include <asm/cpacf.h>
#include "sha.h"
static int sha3_256_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_256;
return 0;
}
static int sha3_256_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha3_state *octx = out;
octx->rsiz = sctx->count;
memcpy(octx->st, sctx->state, sizeof(octx->st));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
return 0;
}
static int sha3_256_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_256;
return 0;
}
static int sha3_224_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_224;
return 0;
}
static struct shash_alg sha3_256_alg = {
.digestsize = SHA3_256_DIGEST_SIZE, /* = 32 */
.init = sha3_256_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_256_export,
.import = sha3_256_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-256",
.cra_driver_name = "sha3-256-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_256_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int sha3_224_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_224;
return 0;
}
static struct shash_alg sha3_224_alg = {
.digestsize = SHA3_224_DIGEST_SIZE,
.init = sha3_224_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_256_export, /* same as for 256 */
.import = sha3_224_import, /* function code different! */
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-224",
.cra_driver_name = "sha3-224-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_224_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha3_256_s390_init(void)
{
int ret;
if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA3_256))
return -ENODEV;
ret = crypto_register_shash(&sha3_256_alg);
if (ret < 0)
goto out;
ret = crypto_register_shash(&sha3_224_alg);
if (ret < 0)
crypto_unregister_shash(&sha3_256_alg);
out:
return ret;
}
static void __exit sha3_256_s390_fini(void)
{
crypto_unregister_shash(&sha3_224_alg);
crypto_unregister_shash(&sha3_256_alg);
}
module_cpu_feature_match(MSA, sha3_256_s390_init);
module_exit(sha3_256_s390_fini);
MODULE_ALIAS_CRYPTO("sha3-256");
MODULE_ALIAS_CRYPTO("sha3-224");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA3-256 and SHA3-224 Secure Hash Algorithm");


@ -0,0 +1,155 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Cryptographic API.
*
* s390 implementation of the SHA3-512 and SHA3-384 Secure Hash Algorithm.
*
* Copyright IBM Corp. 2019
* Author(s): Joerg Schmidbauer (jschmidb@de.ibm.com)
*/
#include <crypto/internal/hash.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/sha.h>
#include <crypto/sha3.h>
#include <asm/cpacf.h>
#include "sha.h"
static int sha3_512_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_512;
return 0;
}
static int sha3_512_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha3_state *octx = out;
octx->rsiz = sctx->count;
octx->rsizw = sctx->count >> 32;
memcpy(octx->st, sctx->state, sizeof(octx->st));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
return 0;
}
static int sha3_512_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
if (unlikely(ictx->rsizw))
return -ERANGE;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_512;
return 0;
}
static int sha3_384_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
if (unlikely(ictx->rsizw))
return -ERANGE;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_384;
return 0;
}
static struct shash_alg sha3_512_alg = {
.digestsize = SHA3_512_DIGEST_SIZE,
.init = sha3_512_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_512_export,
.import = sha3_512_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-512",
.cra_driver_name = "sha3-512-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
MODULE_ALIAS_CRYPTO("sha3-512");
static int sha3_384_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_384;
return 0;
}
static struct shash_alg sha3_384_alg = {
.digestsize = SHA3_384_DIGEST_SIZE,
.init = sha3_384_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_512_export, /* same as for 512 */
.import = sha3_384_import, /* function code different! */
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-384",
.cra_driver_name = "sha3-384-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_sha_ctx),
.cra_module = THIS_MODULE,
}
};
MODULE_ALIAS_CRYPTO("sha3-384");
static int __init init(void)
{
int ret;
if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA3_512))
return -ENODEV;
ret = crypto_register_shash(&sha3_512_alg);
if (ret < 0)
goto out;
ret = crypto_register_shash(&sha3_384_alg);
if (ret < 0)
crypto_unregister_shash(&sha3_512_alg);
out:
return ret;
}
static void __exit fini(void)
{
crypto_unregister_shash(&sha3_512_alg);
crypto_unregister_shash(&sha3_384_alg);
}
module_cpu_feature_match(MSA, init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA3-512 and SHA3-384 Secure Hash Algorithm");

@@ -20,7 +20,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
unsigned int index, n;
/* how much is already in the buffer? */
index = ctx->count & (bsize - 1);
index = ctx->count % bsize;
ctx->count += len;
if ((index + len) < bsize)
@@ -37,7 +37,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
/* process as many blocks as possible */
if (len >= bsize) {
n = len & ~(bsize - 1);
n = (len / bsize) * bsize;
cpacf_kimd(ctx->func, ctx->state, data, n);
data += n;
len -= n;
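The hunks above replace `count & (bsize - 1)` and `len & ~(bsize - 1)` with plain `%` and division. For a power-of-two block size the two forms are identical, but the mask trick silently breaks for SHA3, whose block (rate) sizes, e.g. 136 bytes for SHA3-256 or 104 for SHA3-384, are not powers of two. A quick standalone check:

```c
#include <assert.h>

/* Offset into the partial-block buffer, as in s390_sha_update(). */
static unsigned int buf_index(unsigned long long count, unsigned int bsize)
{
	return count % bsize; /* correct for any block size */
}

static unsigned int buf_index_masked(unsigned long long count, unsigned int bsize)
{
	return count & (bsize - 1); /* only correct for power-of-two bsize */
}
```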
@@ -50,34 +50,63 @@ store:
}
EXPORT_SYMBOL_GPL(s390_sha_update);
static int s390_crypto_shash_parmsize(int func)
{
switch (func) {
case CPACF_KLMD_SHA_1:
return 20;
case CPACF_KLMD_SHA_256:
return 32;
case CPACF_KLMD_SHA_512:
return 64;
case CPACF_KLMD_SHA3_224:
case CPACF_KLMD_SHA3_256:
case CPACF_KLMD_SHA3_384:
case CPACF_KLMD_SHA3_512:
return 200;
default:
return -EINVAL;
}
}
int s390_sha_final(struct shash_desc *desc, u8 *out)
{
struct s390_sha_ctx *ctx = shash_desc_ctx(desc);
unsigned int bsize = crypto_shash_blocksize(desc->tfm);
u64 bits;
unsigned int index, end, plen;
unsigned int n;
int mbl_offset;
/* SHA-512 uses 128 bit padding length */
plen = (bsize > SHA256_BLOCK_SIZE) ? 16 : 8;
/* must perform manual padding */
index = ctx->count & (bsize - 1);
end = (index < bsize - plen) ? bsize : (2 * bsize);
/* start pad with 1 */
ctx->buf[index] = 0x80;
index++;
/* pad with zeros */
memset(ctx->buf + index, 0x00, end - index - 8);
/*
* Append message length. Well, SHA-512 wants a 128 bit length value,
* nevertheless we use u64, should be enough for now...
*/
n = ctx->count % bsize;
bits = ctx->count * 8;
memcpy(ctx->buf + end - 8, &bits, sizeof(bits));
cpacf_kimd(ctx->func, ctx->state, ctx->buf, end);
mbl_offset = s390_crypto_shash_parmsize(ctx->func);
if (mbl_offset < 0)
return -EINVAL;
mbl_offset /= sizeof(u32);
/* set total msg bit length (mbl) in CPACF parmblock */
switch (ctx->func) {
case CPACF_KLMD_SHA_1:
case CPACF_KLMD_SHA_256:
memcpy(ctx->state + mbl_offset, &bits, sizeof(bits));
break;
case CPACF_KLMD_SHA_512:
/*
* the SHA512 parmblock has a 128-bit mbl field, clear
* high-order u64 field, copy bits to low-order u64 field
*/
memset(ctx->state + mbl_offset, 0x00, sizeof(bits));
mbl_offset += sizeof(u64) / sizeof(u32);
memcpy(ctx->state + mbl_offset, &bits, sizeof(bits));
break;
case CPACF_KLMD_SHA3_224:
case CPACF_KLMD_SHA3_256:
case CPACF_KLMD_SHA3_384:
case CPACF_KLMD_SHA3_512:
break;
default:
return -EINVAL;
}
cpacf_klmd(ctx->func, ctx->state, ctx->buf, n);
/* copy digest to out */
memcpy(out, ctx->state, crypto_shash_digestsize(desc->tfm));
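The parameter-block sizes returned by s390_crypto_shash_parmsize() above can be mirrored in a standalone sketch; the function codes and sizes are taken from this patch, and 200 bytes is the Keccak state size (25 x 8 bytes) shared by all SHA3 variants:

```c
#include <assert.h>

/* KLMD function codes as added to asm/cpacf.h in this series. */
enum { KLMD_SHA_1 = 0x01, KLMD_SHA_256 = 0x02, KLMD_SHA_512 = 0x03,
       KLMD_SHA3_224 = 0x20, KLMD_SHA3_256 = 0x21,
       KLMD_SHA3_384 = 0x22, KLMD_SHA3_512 = 0x23 };

/* Mirror of s390_crypto_shash_parmsize(): CPACF parmblock state size. */
static int parmsize(int func)
{
	switch (func) {
	case KLMD_SHA_1:   return 20;  /* SHA-1 state: 5 x u32 */
	case KLMD_SHA_256: return 32;  /* SHA-256 state: 8 x u32 */
	case KLMD_SHA_512: return 64;  /* SHA-512 state: 8 x u64 */
	case KLMD_SHA3_224: case KLMD_SHA3_256:
	case KLMD_SHA3_384: case KLMD_SHA3_512:
		return 200;            /* Keccak state: 25 x u64 */
	default:
		return -1;             /* the kernel code returns -EINVAL */
	}
}
```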

@@ -93,6 +93,10 @@
#define CPACF_KIMD_SHA_1 0x01
#define CPACF_KIMD_SHA_256 0x02
#define CPACF_KIMD_SHA_512 0x03
#define CPACF_KIMD_SHA3_224 0x20
#define CPACF_KIMD_SHA3_256 0x21
#define CPACF_KIMD_SHA3_384 0x22
#define CPACF_KIMD_SHA3_512 0x23
#define CPACF_KIMD_GHASH 0x41
/*
@@ -103,6 +107,10 @@
#define CPACF_KLMD_SHA_1 0x01
#define CPACF_KLMD_SHA_256 0x02
#define CPACF_KLMD_SHA_512 0x03
#define CPACF_KLMD_SHA3_224 0x20
#define CPACF_KLMD_SHA3_256 0x21
#define CPACF_KLMD_SHA3_384 0x22
#define CPACF_KLMD_SHA3_512 0x23
/*
* function codes for the KMAC (COMPUTE MESSAGE AUTHENTICATION CODE)

@@ -9,6 +9,8 @@
#ifndef _ASM_S390_GMAP_H
#define _ASM_S390_GMAP_H
#include <linux/refcount.h>
/* Generic bits for GMAP notification on DAT table entry changes. */
#define GMAP_NOTIFY_SHADOW 0x2
#define GMAP_NOTIFY_MPROT 0x1
@@ -46,7 +48,7 @@ struct gmap {
struct radix_tree_root guest_to_host;
struct radix_tree_root host_to_guest;
spinlock_t guest_table_lock;
atomic_t ref_count;
refcount_t ref_count;
unsigned long *table;
unsigned long asce;
unsigned long asce_end;
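The atomic_t to refcount_t conversion above buys overflow protection: refcount_t saturates instead of wrapping, so a reference-count leak cannot wrap the counter back to zero and trigger a premature free. A minimal userspace analogue of the saturation idea (not the kernel implementation, which also warns on misuse):

```c
#include <assert.h>
#include <limits.h>

/* Toy analogue of refcount_t saturation semantics. */
struct toy_refcount { unsigned int refs; };

static void toy_refcount_inc(struct toy_refcount *r)
{
	if (r->refs == UINT_MAX)
		return; /* saturated: stay pinned, never wrap to 0 */
	r->refs++;
}

static int toy_refcount_dec_and_test(struct toy_refcount *r)
{
	if (r->refs == UINT_MAX)
		return 0; /* saturated counters are never released */
	return --r->refs == 0;
}
```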

@@ -79,4 +79,16 @@ static inline void get_mem_detect_reserved(unsigned long *start,
*size = 0;
}
static inline unsigned long get_mem_detect_end(void)
{
unsigned long start;
unsigned long end;
if (mem_detect.count) {
__get_mem_detect_block(mem_detect.count - 1, &start, &end);
return end;
}
return 0;
}
#endif
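The new get_mem_detect_end() helper above simply returns the end address of the last detected memory block, or 0 if nothing was detected. A standalone sketch with a plain array standing in for the mem_detect structure:

```c
#include <assert.h>
#include <stddef.h>

struct mem_block { unsigned long start, end; };

/* Analogous to get_mem_detect_end(): end of the last block, or 0. */
static unsigned long mem_detect_end(const struct mem_block *blocks,
				    size_t count)
{
	if (count)
		return blocks[count - 1].end;
	return 0;
}
```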

@@ -86,6 +86,7 @@ extern unsigned long zero_page_mask;
*/
extern unsigned long VMALLOC_START;
extern unsigned long VMALLOC_END;
#define VMALLOC_DEFAULT_SIZE ((128UL << 30) - MODULES_LEN)
extern struct page *vmemmap;
#define VMEM_MAX_PHYS ((unsigned long) vmemmap)

@@ -2,7 +2,7 @@
/*
* Kernelspace interface to the pkey device driver
*
* Copyright IBM Corp. 2016
* Copyright IBM Corp. 2016,2019
*
* Author: Harald Freudenberger <freude@de.ibm.com>
*
@@ -15,116 +15,6 @@
#include <linux/types.h>
#include <uapi/asm/pkey.h>
/*
* Generate (AES) random secure key.
* @param cardnr may be -1 (use default card)
* @param domain may be -1 (use default domain)
* @param keytype one of the PKEY_KEYTYPE values
* @param seckey pointer to buffer receiving the secure key
* @return 0 on success, negative errno value on failure
*/
int pkey_genseckey(__u16 cardnr, __u16 domain,
__u32 keytype, struct pkey_seckey *seckey);
/*
* Generate (AES) secure key with given key value.
* @param cardnr may be -1 (use default card)
* @param domain may be -1 (use default domain)
* @param keytype one of the PKEY_KEYTYPE values
* @param clrkey pointer to buffer with clear key data
* @param seckey pointer to buffer receiving the secure key
* @return 0 on success, negative errno value on failure
*/
int pkey_clr2seckey(__u16 cardnr, __u16 domain, __u32 keytype,
const struct pkey_clrkey *clrkey,
struct pkey_seckey *seckey);
/*
* Derive (AES) protected key from the (AES) secure key blob.
* @param cardnr may be -1 (use default card)
* @param domain may be -1 (use default domain)
* @param seckey pointer to buffer with the input secure key
* @param protkey pointer to buffer receiving the protected key and
* additional info (type, length)
* @return 0 on success, negative errno value on failure
*/
int pkey_sec2protkey(__u16 cardnr, __u16 domain,
const struct pkey_seckey *seckey,
struct pkey_protkey *protkey);
/*
* Derive (AES) protected key from a given clear key value.
* @param keytype one of the PKEY_KEYTYPE values
* @param clrkey pointer to buffer with clear key data
* @param protkey pointer to buffer receiving the protected key and
* additional info (type, length)
* @return 0 on success, negative errno value on failure
*/
int pkey_clr2protkey(__u32 keytype,
const struct pkey_clrkey *clrkey,
struct pkey_protkey *protkey);
/*
* Search for a matching crypto card based on the Master Key
* Verification Pattern provided inside a secure key.
* @param seckey pointer to buffer with the input secure key
* @param cardnr pointer to cardnr, receives the card number on success
* @param domain pointer to domain, receives the domain number on success
* @param verify if set, always verify by fetching verification pattern
* from card
* @return 0 on success, negative errno value on failure. If no card could be
* found, -ENODEV is returned.
*/
int pkey_findcard(const struct pkey_seckey *seckey,
__u16 *cardnr, __u16 *domain, int verify);
/*
* Find card and transform secure key to protected key.
* @param seckey pointer to buffer with the input secure key
* @param protkey pointer to buffer receiving the protected key and
* additional info (type, length)
* @return 0 on success, negative errno value on failure
*/
int pkey_skey2pkey(const struct pkey_seckey *seckey,
struct pkey_protkey *protkey);
/*
* Verify that the given secure key is usable with
* the pkey module. Check for correct key type and check for having at
* least one crypto card being able to handle this key (master key
* or old master key verification pattern matches).
* Return some info about the key: keysize in bits, keytype (currently
* only AES), flag if key is wrapped with an old MKVP.
* @param seckey pointer to buffer with the input secure key
* @param pcardnr pointer to cardnr, receives the card number on success
* @param pdomain pointer to domain, receives the domain number on success
* @param pkeysize pointer to keysize, receives the bitsize of the key
* @param pattributes pointer to attributes, receives additional info
* PKEY_VERIFY_ATTR_AES if the key is an AES key
* PKEY_VERIFY_ATTR_OLD_MKVP if key has old mkvp stored in
* @return 0 on success, negative errno value on failure. If no card could
* be found which is able to handle this key, -ENODEV is returned.
*/
int pkey_verifykey(const struct pkey_seckey *seckey,
u16 *pcardnr, u16 *pdomain,
u16 *pkeysize, u32 *pattributes);
/*
* In-kernel API: Generate (AES) random protected key.
* @param keytype one of the PKEY_KEYTYPE values
* @param protkey pointer to buffer receiving the protected key
* @return 0 on success, negative errno value on failure
*/
int pkey_genprotkey(__u32 keytype, struct pkey_protkey *protkey);
/*
* In-kernel API: Verify an (AES) protected key.
* @param protkey pointer to buffer containing the protected key to verify
* @return 0 on success, negative errno value on failure. In case the protected
* key is not valid -EKEYREJECTED is returned
*/
int pkey_verifyprotkey(const struct pkey_protkey *protkey);
/*
* In-kernel API: Transform a key blob (of any type) into a protected key.
* @param key pointer to a buffer containing the key blob
@@ -132,7 +22,7 @@ int pkey_verifyprotkey(const struct pkey_protkey *protkey);
* @param protkey pointer to buffer receiving the protected key
* @return 0 on success, negative errno value on failure
*/
int pkey_keyblob2pkey(const __u8 *key, __u32 keylen,
int pkey_keyblob2pkey(const u8 *key, u32 keylen,
struct pkey_protkey *protkey);
#endif /* _KAPI_PKEY_H */

@@ -324,11 +324,9 @@ static inline void __noreturn disabled_wait(void)
* Basic Machine Check/Program Check Handler.
*/
extern void s390_base_mcck_handler(void);
extern void s390_base_pgm_handler(void);
extern void s390_base_ext_handler(void);
extern void (*s390_base_mcck_handler_fn)(void);
extern void (*s390_base_pgm_handler_fn)(void);
extern void (*s390_base_ext_handler_fn)(void);

@@ -83,6 +83,7 @@ struct parmarea {
extern int noexec_disabled;
extern int memory_end_set;
extern unsigned long memory_end;
extern unsigned long vmalloc_size;
extern unsigned long max_physmem_end;
extern unsigned long __swsusp_reset_dma;

@@ -71,11 +71,16 @@ extern void *__memmove(void *dest, const void *src, size_t n);
#define memcpy(dst, src, len) __memcpy(dst, src, len)
#define memmove(dst, src, len) __memmove(dst, src, len)
#define memset(s, c, n) __memset(s, c, n)
#define strlen(s) __strlen(s)
#define __no_sanitize_prefix_strfunc(x) __##x
#ifndef __NO_FORTIFY
#define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
#endif
#else
#define __no_sanitize_prefix_strfunc(x) x
#endif /* defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) */
void *__memset16(uint16_t *s, uint16_t v, size_t count);
@@ -163,8 +168,8 @@ static inline char *strcpy(char *dst, const char *src)
}
#endif
#ifdef __HAVE_ARCH_STRLEN
static inline size_t strlen(const char *s)
#if defined(__HAVE_ARCH_STRLEN) || (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
static inline size_t __no_sanitize_prefix_strfunc(strlen)(const char *s)
{
register unsigned long r0 asm("0") = 0;
const char *tmp = s;

@@ -2,7 +2,7 @@
/*
* Userspace interface to the pkey device driver
*
* Copyright IBM Corp. 2017
* Copyright IBM Corp. 2017, 2019
*
* Author: Harald Freudenberger <freude@de.ibm.com>
*
@@ -20,38 +20,74 @@
#define PKEY_IOCTL_MAGIC 'p'
#define SECKEYBLOBSIZE 64 /* secure key blob size is always 64 bytes */
#define PROTKEYBLOBSIZE 80 /* protected key blob size is always 80 bytes */
#define MAXPROTKEYSIZE 64 /* a protected key blob may be up to 64 bytes */
#define MAXCLRKEYSIZE 32 /* a clear key value may be up to 32 bytes */
#define SECKEYBLOBSIZE 64 /* secure key blob size is always 64 bytes */
#define PROTKEYBLOBSIZE 80 /* protected key blob size is always 80 bytes */
#define MAXPROTKEYSIZE 64 /* a protected key blob may be up to 64 bytes */
#define MAXCLRKEYSIZE 32 /* a clear key value may be up to 32 bytes */
#define MAXAESCIPHERKEYSIZE 136 /* our AES cipher keys always have 136 bytes */
#define MINKEYBLOBSIZE SECKEYBLOBSIZE /* Minimum size of a key blob */
#define MAXKEYBLOBSIZE PROTKEYBLOBSIZE /* Maximum size of a key blob */
/* Minimum and maximum size of a key blob */
#define MINKEYBLOBSIZE SECKEYBLOBSIZE
#define MAXKEYBLOBSIZE MAXAESCIPHERKEYSIZE
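With the widened MAXKEYBLOBSIZE above, a key blob handed to the newer ioctls may now be anywhere from 64 bytes (a CCA AES secure key) up to 136 bytes (a CCA AES cipher key). A hypothetical userspace-side length sanity check built from these constants (the helper name is illustrative, not part of the uapi):

```c
#include <assert.h>

#define SECKEYBLOBSIZE      64  /* CCA AES secure key blob */
#define MAXAESCIPHERKEYSIZE 136 /* CCA AES cipher key blob */
#define MINKEYBLOBSIZE SECKEYBLOBSIZE
#define MAXKEYBLOBSIZE MAXAESCIPHERKEYSIZE

/* Hypothetical helper: reject blob lengths outside the pkey range. */
static int keyblob_len_ok(unsigned int keylen)
{
	return keylen >= MINKEYBLOBSIZE && keylen <= MAXKEYBLOBSIZE;
}
```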
/* defines for the type field within the pkey_protkey struct */
#define PKEY_KEYTYPE_AES_128 1
#define PKEY_KEYTYPE_AES_192 2
#define PKEY_KEYTYPE_AES_256 3
#define PKEY_KEYTYPE_AES_128 1
#define PKEY_KEYTYPE_AES_192 2
#define PKEY_KEYTYPE_AES_256 3
/* Struct to hold a secure key blob */
/* the newer ioctls use a pkey_key_type enum for type information */
enum pkey_key_type {
PKEY_TYPE_CCA_DATA = (__u32) 1,
PKEY_TYPE_CCA_CIPHER = (__u32) 2,
};
/* the newer ioctls use a pkey_key_size enum for key size information */
enum pkey_key_size {
PKEY_SIZE_AES_128 = (__u32) 128,
PKEY_SIZE_AES_192 = (__u32) 192,
PKEY_SIZE_AES_256 = (__u32) 256,
PKEY_SIZE_UNKNOWN = (__u32) 0xFFFFFFFF,
};
/* some of the newer ioctls use these flags */
#define PKEY_FLAGS_MATCH_CUR_MKVP 0x00000002
#define PKEY_FLAGS_MATCH_ALT_MKVP 0x00000004
/* keygenflags defines for CCA AES cipher keys */
#define PKEY_KEYGEN_XPRT_SYM 0x00008000
#define PKEY_KEYGEN_XPRT_UASY 0x00004000
#define PKEY_KEYGEN_XPRT_AASY 0x00002000
#define PKEY_KEYGEN_XPRT_RAW 0x00001000
#define PKEY_KEYGEN_XPRT_CPAC 0x00000800
#define PKEY_KEYGEN_XPRT_DES 0x00000080
#define PKEY_KEYGEN_XPRT_AES 0x00000040
#define PKEY_KEYGEN_XPRT_RSA 0x00000008
/* Struct to hold apqn target info (card/domain pair) */
struct pkey_apqn {
__u16 card;
__u16 domain;
};
/* Struct to hold a CCA AES secure key blob */
struct pkey_seckey {
__u8 seckey[SECKEYBLOBSIZE]; /* the secure key blob */
};
/* Struct to hold protected key and length info */
struct pkey_protkey {
__u32 type; /* key type, one of the PKEY_KEYTYPE values */
__u32 type; /* key type, one of the PKEY_KEYTYPE_AES values */
__u32 len; /* bytes actually stored in protkey[] */
__u8 protkey[MAXPROTKEYSIZE]; /* the protected key blob */
};
/* Struct to hold a clear key value */
/* Struct to hold an AES clear key value */
struct pkey_clrkey {
__u8 clrkey[MAXCLRKEYSIZE]; /* 16, 24, or 32 byte clear key value */
};
/*
* Generate secure key
* Generate CCA AES secure key.
*/
struct pkey_genseck {
__u16 cardnr; /* in: card to use or FFFF for any */
@@ -62,7 +98,7 @@ struct pkey_genseck {
#define PKEY_GENSECK _IOWR(PKEY_IOCTL_MAGIC, 0x01, struct pkey_genseck)
/*
* Construct secure key from clear key value
* Construct CCA AES secure key from clear key value
*/
struct pkey_clr2seck {
__u16 cardnr; /* in: card to use or FFFF for any */
@@ -74,7 +110,7 @@ struct pkey_clr2seck {
#define PKEY_CLR2SECK _IOWR(PKEY_IOCTL_MAGIC, 0x02, struct pkey_clr2seck)
/*
* Fabricate protected key from a secure key
* Fabricate AES protected key from a CCA AES secure key
*/
struct pkey_sec2protk {
__u16 cardnr; /* in: card to use or FFFF for any */
@@ -85,7 +121,7 @@ struct pkey_sec2protk {
#define PKEY_SEC2PROTK _IOWR(PKEY_IOCTL_MAGIC, 0x03, struct pkey_sec2protk)
/*
* Fabricate protected key from an clear key value
* Fabricate AES protected key from clear key value
*/
struct pkey_clr2protk {
__u32 keytype; /* in: key type to generate */
@@ -96,7 +132,7 @@ struct pkey_clr2protk {
/*
* Search for matching crypto card based on the Master Key
* Verification Pattern provided inside a secure key.
* Verification Pattern provided inside a CCA AES secure key.
*/
struct pkey_findcard {
struct pkey_seckey seckey; /* in: the secure key blob */
@@ -115,7 +151,7 @@ struct pkey_skey2pkey {
#define PKEY_SKEY2PKEY _IOWR(PKEY_IOCTL_MAGIC, 0x06, struct pkey_skey2pkey)
/*
* Verify that the given secure key is usable with
* Verify that the given CCA AES secure key is usable with
* the pkey module. Check for correct key type and check for having at
* least one crypto card being able to handle this key (master key
* or old master key verification pattern matches).
@@ -134,7 +170,7 @@ struct pkey_verifykey {
#define PKEY_VERIFY_ATTR_OLD_MKVP 0x00000100 /* key has old MKVP value */
/*
* Generate (AES) random protected key.
* Generate AES random protected key.
*/
struct pkey_genprotk {
__u32 keytype; /* in: key type to generate */
@@ -144,7 +180,7 @@ struct pkey_genprotk {
#define PKEY_GENPROTK _IOWR(PKEY_IOCTL_MAGIC, 0x08, struct pkey_genprotk)
/*
* Verify an (AES) protected key.
* Verify an AES protected key.
*/
struct pkey_verifyprotk {
struct pkey_protkey protkey; /* in: the protected key to verify */
@@ -160,7 +196,184 @@ struct pkey_kblob2pkey {
__u32 keylen; /* in: the key blob length */
struct pkey_protkey protkey; /* out: the protected key */
};
#define PKEY_KBLOB2PROTK _IOWR(PKEY_IOCTL_MAGIC, 0x0A, struct pkey_kblob2pkey)
/*
* Generate secure key, version 2.
* Generate either a CCA AES secure key or a CCA AES cipher key.
* There needs to be a list of apqns given with at least one entry in there.
* All apqns in the list need to be exact apqns, 0xFFFF as ANY card or domain
* is not supported. The implementation walks through the list of apqns and
* tries to send the request to each apqn without any further checking (like
* card type or online state). If the apqn fails, simply the next one in the
* list is tried until success (return 0) or the end of the list is reached
* (return -1 with errno ENODEV). You may use the PKEY_APQNS4KT ioctl to
* generate a list of apqns based on the key type to generate.
* The keygenflags argument is passed to the low level generation functions
* individual for the key type and has a key type specific meaning. Currently
* only CCA AES cipher keys react to this parameter: Use one or more of the
* PKEY_KEYGEN_* flags to widen the export possibilities. By default a cipher
* key is only exportable for CPACF (PKEY_KEYGEN_XPRT_CPAC).
*/
struct pkey_genseck2 {
struct pkey_apqn __user *apqns; /* in: ptr to list of apqn targets*/
__u32 apqn_entries; /* in: # of apqn target list entries */
enum pkey_key_type type; /* in: key type to generate */
enum pkey_key_size size; /* in: key size to generate */
__u32 keygenflags; /* in: key generation flags */
__u8 __user *key; /* in: pointer to key blob buffer */
__u32 keylen; /* in: available key blob buffer size */
/* out: actual key blob size */
};
#define PKEY_GENSECK2 _IOWR(PKEY_IOCTL_MAGIC, 0x11, struct pkey_genseck2)
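A hedged sketch of what the userspace side of PKEY_GENSECK2 looks like. The structure and ioctl number are re-derived locally here so the snippet compiles without the s390 uapi header; real code should simply include <asm/pkey.h>, and a real caller would open /dev/pkey and issue ioctl(fd, PKEY_GENSECK2, &req), retrying with a larger buffer if keylen comes back bigger than the buffer supplied:

```c
#include <assert.h>
#include <linux/ioctl.h>
#include <stdint.h>

/* Local mirror of the uapi definitions shown above -- for illustration
 * only; real code includes <asm/pkey.h>. */
struct pkey_apqn { uint16_t card; uint16_t domain; };

struct pkey_genseck2 {
	struct pkey_apqn *apqns;   /* in: list of exact APQN targets */
	uint32_t apqn_entries;     /* in: number of entries, must be >= 1 */
	uint32_t type;             /* in: e.g. PKEY_TYPE_CCA_CIPHER (2) */
	uint32_t size;             /* in: e.g. PKEY_SIZE_AES_256 (256) */
	uint32_t keygenflags;      /* in: PKEY_KEYGEN_XPRT_* flags */
	uint8_t *key;              /* in: key blob buffer */
	uint32_t keylen;           /* in: buffer size; out: blob size */
};

#define PKEY_IOCTL_MAGIC 'p'
#define PKEY_GENSECK2 _IOWR(PKEY_IOCTL_MAGIC, 0x11, struct pkey_genseck2)
```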
/*
* Generate secure key from clear key value, version 2.
* Construct a CCA AES secure key or CCA AES cipher key from a given clear key
* value.
* There needs to be a list of apqns given with at least one entry in there.
* All apqns in the list need to be exact apqns, 0xFFFF as ANY card or domain
* is not supported. The implementation walks through the list of apqns and
* tries to send the request to each apqn without any further checking (like
* card type or online state). If the apqn fails, simply the next one in the
* list is tried until success (return 0) or the end of the list is reached
* (return -1 with errno ENODEV). You may use the PKEY_APQNS4KT ioctl to
* generate a list of apqns based on the key type to generate.
* The keygenflags argument is passed to the low level generation functions
* individual for the key type and has a key type specific meaning. Currently
* only CCA AES cipher keys react to this parameter: Use one or more of the
* PKEY_KEYGEN_* flags to widen the export possibilities. By default a cipher
* key is only exportable for CPACF (PKEY_KEYGEN_XPRT_CPAC).
*/
struct pkey_clr2seck2 {
struct pkey_apqn __user *apqns; /* in: ptr to list of apqn targets */
__u32 apqn_entries; /* in: # of apqn target list entries */
enum pkey_key_type type; /* in: key type to generate */
enum pkey_key_size size; /* in: key size to generate */
__u32 keygenflags; /* in: key generation flags */
struct pkey_clrkey clrkey; /* in: the clear key value */
__u8 __user *key; /* in: pointer to key blob buffer */
__u32 keylen; /* in: available key blob buffer size */
/* out: actual key blob size */
};
#define PKEY_CLR2SECK2 _IOWR(PKEY_IOCTL_MAGIC, 0x12, struct pkey_clr2seck2)
/*
* Verify the given secure key, version 2.
* Check for correct key type. If cardnr and domain are given (are not
* 0xFFFF) also check if this apqn is able to handle this type of key.
* If cardnr and/or domain is 0xFFFF, on return these values are filled
* with one apqn able to handle this key.
* The function also checks for the master key verification patterns
* of the key matching to the current or alternate mkvp of the apqn.
* Currently CCA AES secure keys and CCA AES cipher keys are supported.
* The flags field is updated with some additional info about the apqn mkvp
* match: If the current mkvp matches to the key's mkvp then the
* PKEY_FLAGS_MATCH_CUR_MKVP bit is set, if the alternate mkvp matches to
* the key's mkvp the PKEY_FLAGS_MATCH_ALT_MKVP is set. For CCA keys the
* alternate mkvp is the old master key verification pattern.
* CCA AES secure keys are also checked to have the CPACF export allowed
* bit enabled (XPRTCPAC) in the kmf1 field.
* The ioctl returns 0 as long as the given or found apqn's current or
* alternate mkvp matches the key's mkvp. If the given apqn does not
* match or no such apqn is found, -1 with errno
* ENODEV is returned.
*/
struct pkey_verifykey2 {
__u8 __user *key; /* in: pointer to key blob */
__u32 keylen; /* in: key blob size */
__u16 cardnr; /* in/out: card number */
__u16 domain; /* in/out: domain number */
enum pkey_key_type type; /* out: the key type */
enum pkey_key_size size; /* out: the key size */
__u32 flags; /* out: additional key info flags */
};
#define PKEY_VERIFYKEY2 _IOWR(PKEY_IOCTL_MAGIC, 0x17, struct pkey_verifykey2)
/*
* Transform a key blob (of any type) into a protected key, version 2.
* There needs to be a list of apqns given with at least one entry in there.
* All apqns in the list need to be exact apqns, 0xFFFF as ANY card or domain
* is not supported. The implementation walks through the list of apqns and
* tries to send the request to each apqn without any further checking (like
* card type or online state). If the apqn fails, simply the next one in the
* list is tried until success (return 0) or the end of the list is reached
* (return -1 with errno ENODEV). You may use the PKEY_APQNS4K ioctl to
* generate a list of apqns based on the key.
*/
struct pkey_kblob2pkey2 {
__u8 __user *key; /* in: pointer to key blob */
__u32 keylen; /* in: key blob size */
struct pkey_apqn __user *apqns; /* in: ptr to list of apqn targets */
__u32 apqn_entries; /* in: # of apqn target list entries */
struct pkey_protkey protkey; /* out: the protected key */
};
#define PKEY_KBLOB2PROTK2 _IOWR(PKEY_IOCTL_MAGIC, 0x1A, struct pkey_kblob2pkey2)
/*
* Build a list of APQNs based on a key blob given.
* Is able to find out which type of secure key is given (CCA AES secure
* key or CCA AES cipher key) and tries to find all matching crypto cards
* based on the MKVP and maybe other criteria (like CCA AES cipher keys
* need a CEX5C or higher). The list of APQNs is further filtered by the key's
* mkvp which needs to match to either the current mkvp or the alternate mkvp
* (which is the old mkvp on CCA adapters) of the apqns. The flags argument may
* be used to limit the matching apqns. If the PKEY_FLAGS_MATCH_CUR_MKVP is
* given, only the current mkvp of each apqn is compared. Likewise with the
* PKEY_FLAGS_MATCH_ALT_MKVP. If both are given, it is assumed to
* return apqns where either the current or the alternate mkvp
* matches. At least one of the matching flags needs to be given.
* The list of matching apqns is stored into the space given by the apqns
* argument and the number of stored entries goes into apqn_entries. If the list
* is empty (apqn_entries is 0) the apqn_entries field is updated to the number
* of apqn targets found and the ioctl returns with 0. If apqn_entries is > 0
* but the number of apqn targets does not fit into the list, the apqn_entries
* field is updated with the number of required entries but there are no apqn
* values stored in the list and the ioctl returns with ENOSPC. If no matching
* APQN is found, the ioctl returns with 0 but the apqn_entries value is 0.
*/
struct pkey_apqns4key {
__u8 __user *key; /* in: pointer to key blob */
__u32 keylen; /* in: key blob size */
__u32 flags; /* in: match controlling flags */
struct pkey_apqn __user *apqns; /* in/out: ptr to list of apqn targets*/
__u32 apqn_entries; /* in: max # of apqn entries in the list */
/* out: # apqns stored into the list */
};
#define PKEY_APQNS4K _IOWR(PKEY_IOCTL_MAGIC, 0x1B, struct pkey_apqns4key)
/*
* Build a list of APQNs based on a key type given.
* Build a list of APQNs based on a given key type and maybe further
* restrict the list by given master key verification patterns.
* For different key types there may be different ways to match the
* master key verification patterns. For CCA keys (CCA data key and CCA
* cipher key) the first 8 bytes of cur_mkvp refer to the current mkvp value
* of the apqn and the first 8 bytes of the alt_mkvp refer to the old mkvp.
* The flags argument controls if the apqns current and/or alternate mkvp
* should match. If the PKEY_FLAGS_MATCH_CUR_MKVP is given, only the current
* mkvp of each apqn is compared. Likewise with the PKEY_FLAGS_MATCH_ALT_MKVP.
* If both are given, it is assumed to return apqns where either the
* current or the alternate mkvp matches. If no match flag is given
* (flags is 0) the mkvp values are ignored for the match process.
* The list of matching apqns is stored into the space given by the apqns
* argument and the number of stored entries goes into apqn_entries. If the list
* is empty (apqn_entries is 0) the apqn_entries field is updated to the number
* of apqn targets found and the ioctl returns with 0. If apqn_entries is > 0
* but the number of apqn targets does not fit into the list, the apqn_entries
* field is updated with the number of required entries but there are no apqn
* values stored in the list and the ioctl returns with ENOSPC. If no matching
* APQN is found, the ioctl returns with 0 but the apqn_entries value is 0.
*/
struct pkey_apqns4keytype {
enum pkey_key_type type; /* in: key type */
__u8 cur_mkvp[32]; /* in: current mkvp */
__u8 alt_mkvp[32]; /* in: alternate mkvp */
__u32 flags; /* in: match controlling flags */
struct pkey_apqn __user *apqns; /* in/out: ptr to list of apqn targets*/
__u32 apqn_entries; /* in: max # of apqn entries in the list */
/* out: # apqns stored into the list */
};
#define PKEY_APQNS4KT _IOWR(PKEY_IOCTL_MAGIC, 0x1C, struct pkey_apqns4keytype)
#endif /* _UAPI_PKEY_H */

@@ -10,20 +10,12 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
# Do not trace early setup code
CFLAGS_REMOVE_early.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_early_nobss.o = $(CC_FLAGS_FTRACE)
endif
GCOV_PROFILE_early.o := n
GCOV_PROFILE_early_nobss.o := n
KCOV_INSTRUMENT_early.o := n
KCOV_INSTRUMENT_early_nobss.o := n
UBSAN_SANITIZE_early.o := n
UBSAN_SANITIZE_early_nobss.o := n
KASAN_SANITIZE_early_nobss.o := n
KASAN_SANITIZE_ipl.o := n
KASAN_SANITIZE_machine_kexec.o := n
@@ -48,7 +40,7 @@ CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
obj-y := traps.o time.o process.o base.o early.o setup.o idle.o vtime.o
obj-y += processor.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o
obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o early_nobss.o
obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o
obj-y += sysinfo.o lgr.o os_info.o machine_kexec.o pgm_check.o
obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
@@ -90,6 +82,3 @@ obj-$(CONFIG_TRACEPOINTS) += trace.o
# vdso
obj-y += vdso64/
obj-$(CONFIG_COMPAT_VDSO) += vdso32/
chkbss := head64.o early_nobss.o
include $(srctree)/arch/s390/scripts/Makefile.chkbss

@@ -16,27 +16,6 @@
GEN_BR_THUNK %r9
GEN_BR_THUNK %r14
ENTRY(s390_base_mcck_handler)
basr %r13,0
0: lg %r15,__LC_NODAT_STACK # load panic stack
aghi %r15,-STACK_FRAME_OVERHEAD
larl %r1,s390_base_mcck_handler_fn
lg %r9,0(%r1)
ltgr %r9,%r9
jz 1f
BASR_EX %r14,%r9
1: la %r1,4095
lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)
lpswe __LC_MCK_OLD_PSW
ENDPROC(s390_base_mcck_handler)
.section .bss
.align 8
.globl s390_base_mcck_handler_fn
s390_base_mcck_handler_fn:
.quad 0
.previous
ENTRY(s390_base_ext_handler)
stmg %r0,%r15,__LC_SAVE_AREA_ASYNC
basr %r13,0

@@ -32,6 +32,21 @@
#include <asm/boot_data.h>
#include "entry.h"
static void __init reset_tod_clock(void)
{
u64 time;
if (store_tod_clock(&time) == 0)
return;
/* TOD clock not running. Set the clock to Unix Epoch. */
if (set_tod_clock(TOD_UNIX_EPOCH) != 0 || store_tod_clock(&time) != 0)
disabled_wait();
memset(tod_clock_base, 0, 16);
*(__u64 *) &tod_clock_base[1] = TOD_UNIX_EPOCH;
S390_lowcore.last_update_clock = TOD_UNIX_EPOCH;
}
/*
* Initialize storage key for kernel pages
*/
@@ -301,6 +316,7 @@ static void __init check_image_bootable(void)
void __init startup_init(void)
{
reset_tod_clock();
check_image_bootable();
time_early_init();
init_kernel_storage_key();

@@ -1,45 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright IBM Corp. 2007, 2018
*/
/*
* Early setup functions which may not rely on an initialized bss
* section. The last thing that is supposed to happen here is
* initialization of the bss section.
*/
#include <linux/processor.h>
#include <linux/string.h>
#include <asm/sections.h>
#include <asm/lowcore.h>
#include <asm/timex.h>
#include <asm/kasan.h>
#include "entry.h"
static void __init reset_tod_clock(void)
{
u64 time;
if (store_tod_clock(&time) == 0)
return;
/* TOD clock not running. Set the clock to Unix Epoch. */
if (set_tod_clock(TOD_UNIX_EPOCH) != 0 || store_tod_clock(&time) != 0)
disabled_wait();
memset(tod_clock_base, 0, 16);
*(__u64 *) &tod_clock_base[1] = TOD_UNIX_EPOCH;
S390_lowcore.last_update_clock = TOD_UNIX_EPOCH;
}
static void __init clear_bss_section(void)
{
memset(__bss_start, 0, __bss_stop - __bss_start);
}
void __init startup_init_nobss(void)
{
reset_tod_clock();
clear_bss_section();
kasan_early_init();
}


@ -25,7 +25,7 @@ static int __init setup_early_printk(char *buf)
if (early_console)
return 0;
/* Accept only "earlyprintk" and "earlyprintk=sclp" */
if (buf && strncmp(buf, "sclp", 4))
if (buf && !str_has_prefix(buf, "sclp"))
return 0;
if (!sclp.has_linemode && !sclp.has_vt220)
return 0;


@ -34,11 +34,9 @@ ENTRY(startup_continue)
larl %r14,init_task
stg %r14,__LC_CURRENT
larl %r15,init_thread_union+THREAD_SIZE-STACK_FRAME_OVERHEAD
#
# Early setup functions that may not rely on an initialized bss section,
# like moving the initrd. Returns with an initialized bss section.
#
brasl %r14,startup_init_nobss
#ifdef CONFIG_KASAN
brasl %r14,kasan_early_init
#endif
#
# Early machine initialization and detection functions.
#


@ -472,11 +472,11 @@ int module_finalize(const Elf_Ehdr *hdr,
apply_alternatives(aseg, aseg + s->sh_size);
if (IS_ENABLED(CONFIG_EXPOLINE) &&
(!strncmp(".s390_indirect", secname, 14)))
(str_has_prefix(secname, ".s390_indirect")))
nospec_revert(aseg, aseg + s->sh_size);
if (IS_ENABLED(CONFIG_EXPOLINE) &&
(!strncmp(".s390_return", secname, 12)))
(str_has_prefix(secname, ".s390_return")))
nospec_revert(aseg, aseg + s->sh_size);
}


@ -514,7 +514,6 @@ static void extend_sampling_buffer(struct sf_buffer *sfb,
sfb_pending_allocs(sfb, hwc));
}
/* Number of perf events counting hardware events */
static atomic_t num_events;
/* Used to avoid races in calling reserve/release_cpumf_hardware */
@ -923,9 +922,10 @@ static void cpumsf_pmu_enable(struct pmu *pmu)
lpp(&S390_lowcore.lpp);
debug_sprintf_event(sfdbg, 6, "pmu_enable: es=%i cs=%i ed=%i cd=%i "
"tear=%p dear=%p\n", cpuhw->lsctl.es, cpuhw->lsctl.cs,
cpuhw->lsctl.ed, cpuhw->lsctl.cd,
(void *) cpuhw->lsctl.tear, (void *) cpuhw->lsctl.dear);
"tear=%p dear=%p\n", cpuhw->lsctl.es,
cpuhw->lsctl.cs, cpuhw->lsctl.ed, cpuhw->lsctl.cd,
(void *) cpuhw->lsctl.tear,
(void *) cpuhw->lsctl.dear);
}
static void cpumsf_pmu_disable(struct pmu *pmu)
@ -1083,7 +1083,8 @@ static void debug_sample_entry(struct hws_basic_entry *sample,
struct hws_trailer_entry *te)
{
debug_sprintf_event(sfdbg, 4, "hw_collect_samples: Found unknown "
"sampling data entry: te->f=%i basic.def=%04x (%p)\n",
"sampling data entry: te->f=%i basic.def=%04x "
"(%p)\n",
te->f, sample->def, sample);
}
@ -1216,7 +1217,7 @@ static void hw_perf_event_update(struct perf_event *event, int flush_all)
/* Timestamps are valid for full sample-data-blocks only */
debug_sprintf_event(sfdbg, 6, "hw_perf_event_update: sdbt=%p "
"overflow=%llu timestamp=0x%llx\n",
"overflow=%llu timestamp=%#llx\n",
sdbt, te->overflow,
(te->f) ? trailer_timestamp(te) : 0ULL);
@ -1879,10 +1880,12 @@ static struct attribute_group cpumsf_pmu_events_group = {
.name = "events",
.attrs = cpumsf_pmu_events_attr,
};
static struct attribute_group cpumsf_pmu_format_group = {
.name = "format",
.attrs = cpumsf_pmu_format_attr,
};
static const struct attribute_group *cpumsf_pmu_attr_groups[] = {
&cpumsf_pmu_events_group,
&cpumsf_pmu_format_group,
@ -1938,7 +1941,8 @@ static void cpumf_measurement_alert(struct ext_code ext_code,
/* Report measurement alerts only for non-PRA codes */
if (alert != CPU_MF_INT_SF_PRA)
debug_sprintf_event(sfdbg, 6, "measurement alert: 0x%x\n", alert);
debug_sprintf_event(sfdbg, 6, "measurement alert: %#x\n",
alert);
/* Sampling authorization change request */
if (alert & CPU_MF_INT_SF_SACA)
@ -1959,6 +1963,7 @@ static void cpumf_measurement_alert(struct ext_code ext_code,
sf_disable();
}
}
static int cpusf_pmu_setup(unsigned int cpu, int flags)
{
/* Ignore the notification if no events are scheduled on the PMU.
@ -2096,5 +2101,6 @@ static int __init init_cpum_sampling_pmu(void)
out:
return err;
}
arch_initcall(init_cpum_sampling_pmu);
core_param(cpum_sfb_size, CPUM_SF_MAX_SDB, sfb_size, 0640);


@ -184,20 +184,30 @@ unsigned long get_wchan(struct task_struct *p)
if (!p || p == current || p->state == TASK_RUNNING || !task_stack_page(p))
return 0;
if (!try_get_task_stack(p))
return 0;
low = task_stack_page(p);
high = (struct stack_frame *) task_pt_regs(p);
sf = (struct stack_frame *) p->thread.ksp;
if (sf <= low || sf > high)
return 0;
for (count = 0; count < 16; count++) {
sf = (struct stack_frame *) sf->back_chain;
if (sf <= low || sf > high)
return 0;
return_address = sf->gprs[8];
if (!in_sched_functions(return_address))
return return_address;
if (sf <= low || sf > high) {
return_address = 0;
goto out;
}
return 0;
for (count = 0; count < 16; count++) {
sf = (struct stack_frame *)READ_ONCE_NOCHECK(sf->back_chain);
if (sf <= low || sf > high) {
return_address = 0;
goto out;
}
return_address = READ_ONCE_NOCHECK(sf->gprs[8]);
if (!in_sched_functions(return_address))
goto out;
}
out:
put_task_stack(p);
return return_address;
}
unsigned long arch_align_stack(unsigned long sp)


@ -99,6 +99,7 @@ int __bootdata_preserved(prot_virt_guest);
int __bootdata(noexec_disabled);
int __bootdata(memory_end_set);
unsigned long __bootdata(memory_end);
unsigned long __bootdata(vmalloc_size);
unsigned long __bootdata(max_physmem_end);
struct mem_detect_info __bootdata(mem_detect);
@ -168,15 +169,15 @@ static void __init set_preferred_console(void)
static int __init conmode_setup(char *str)
{
#if defined(CONFIG_SCLP_CONSOLE) || defined(CONFIG_SCLP_VT220_CONSOLE)
if (strncmp(str, "hwc", 4) == 0 || strncmp(str, "sclp", 5) == 0)
if (!strcmp(str, "hwc") || !strcmp(str, "sclp"))
SET_CONSOLE_SCLP;
#endif
#if defined(CONFIG_TN3215_CONSOLE)
if (strncmp(str, "3215", 5) == 0)
if (!strcmp(str, "3215"))
SET_CONSOLE_3215;
#endif
#if defined(CONFIG_TN3270_CONSOLE)
if (strncmp(str, "3270", 5) == 0)
if (!strcmp(str, "3270"))
SET_CONSOLE_3270;
#endif
set_preferred_console();
@ -211,7 +212,7 @@ static void __init conmode_default(void)
#endif
return;
}
if (strncmp(ptr + 8, "3270", 4) == 0) {
if (str_has_prefix(ptr + 8, "3270")) {
#if defined(CONFIG_TN3270_CONSOLE)
SET_CONSOLE_3270;
#elif defined(CONFIG_TN3215_CONSOLE)
@ -219,7 +220,7 @@ static void __init conmode_default(void)
#elif defined(CONFIG_SCLP_CONSOLE) || defined(CONFIG_SCLP_VT220_CONSOLE)
SET_CONSOLE_SCLP;
#endif
} else if (strncmp(ptr + 8, "3215", 4) == 0) {
} else if (str_has_prefix(ptr + 8, "3215")) {
#if defined(CONFIG_TN3215_CONSOLE)
SET_CONSOLE_3215;
#elif defined(CONFIG_TN3270_CONSOLE)
@ -302,15 +303,6 @@ void machine_power_off(void)
void (*pm_power_off)(void) = machine_power_off;
EXPORT_SYMBOL_GPL(pm_power_off);
static int __init parse_vmalloc(char *arg)
{
if (!arg)
return -EINVAL;
VMALLOC_END = (memparse(arg, &arg) + PAGE_SIZE - 1) & PAGE_MASK;
return 0;
}
early_param("vmalloc", parse_vmalloc);
void *restart_stack __section(.data);
unsigned long stack_alloc(void)
@ -563,10 +555,9 @@ static void __init setup_resources(void)
static void __init setup_memory_end(void)
{
unsigned long vmax, vmalloc_size, tmp;
unsigned long vmax, tmp;
/* Choose kernel address space layout: 3 or 4 levels. */
vmalloc_size = VMALLOC_END ?: (128UL << 30) - MODULES_LEN;
if (IS_ENABLED(CONFIG_KASAN)) {
vmax = IS_ENABLED(CONFIG_KASAN_S390_4_LEVEL_PAGING)
? _REGION1_SIZE
@ -990,6 +981,10 @@ static int __init setup_hwcaps(void)
case 0x3907:
strcpy(elf_platform, "z14");
break;
case 0x8561:
case 0x8562:
strcpy(elf_platform, "z15");
break;
}
/*


@ -6,57 +6,19 @@
* Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
*/
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/stacktrace.h>
#include <linux/kallsyms.h>
#include <linux/export.h>
#include <asm/stacktrace.h>
#include <asm/unwind.h>
void save_stack_trace(struct stack_trace *trace)
void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
struct task_struct *task, struct pt_regs *regs)
{
struct unwind_state state;
unsigned long addr;
unwind_for_each_frame(&state, current, NULL, 0) {
if (trace->nr_entries >= trace->max_entries)
unwind_for_each_frame(&state, task, regs, 0) {
addr = unwind_get_return_address(&state);
if (!addr || !consume_entry(cookie, addr, false))
break;
if (trace->skip > 0)
trace->skip--;
else
trace->entries[trace->nr_entries++] = state.ip;
}
}
EXPORT_SYMBOL_GPL(save_stack_trace);
void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
{
struct unwind_state state;
unwind_for_each_frame(&state, tsk, NULL, 0) {
if (trace->nr_entries >= trace->max_entries)
break;
if (in_sched_functions(state.ip))
continue;
if (trace->skip > 0)
trace->skip--;
else
trace->entries[trace->nr_entries++] = state.ip;
}
}
EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
{
struct unwind_state state;
unwind_for_each_frame(&state, current, regs, 0) {
if (trace->nr_entries >= trace->max_entries)
break;
if (trace->skip > 0)
trace->skip--;
else
trace->entries[trace->nr_entries++] = state.ip;
}
}
EXPORT_SYMBOL_GPL(save_stack_trace_regs);


@ -97,21 +97,13 @@ static const struct vm_special_mapping vdso_mapping = {
.mremap = vdso_mremap,
};
static int __init vdso_setup(char *s)
static int __init vdso_setup(char *str)
{
unsigned long val;
int rc;
bool enabled;
rc = 0;
if (strncmp(s, "on", 3) == 0)
vdso_enabled = 1;
else if (strncmp(s, "off", 4) == 0)
vdso_enabled = 0;
else {
rc = kstrtoul(s, 0, &val);
vdso_enabled = rc ? 0 : !!val;
}
return !rc;
if (!kstrtobool(str, &enabled))
vdso_enabled = enabled;
return 1;
}
__setup("vdso=", vdso_setup);


@ -11,6 +11,3 @@ lib-$(CONFIG_UPROBES) += probes.o
# Instrumenting memory accesses to __user data (in different address space)
# produce false positives
KASAN_SANITIZE_uaccess.o := n
chkbss := mem.o
include $(srctree)/arch/s390/scripts/Makefile.chkbss


@ -19,6 +19,7 @@
#include <linux/memblock.h>
#include <linux/ctype.h>
#include <linux/ioport.h>
#include <linux/refcount.h>
#include <asm/diag.h>
#include <asm/page.h>
#include <asm/pgtable.h>
@ -64,7 +65,7 @@ struct dcss_segment {
char res_name[16];
unsigned long start_addr;
unsigned long end;
atomic_t ref_count;
refcount_t ref_count;
int do_nonshared;
unsigned int vm_segtype;
struct qrange range[6];
@ -362,7 +363,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
seg->start_addr = start_addr;
seg->end = end_addr;
seg->do_nonshared = do_nonshared;
atomic_set(&seg->ref_count, 1);
refcount_set(&seg->ref_count, 1);
list_add(&seg->list, &dcss_list);
*addr = seg->start_addr;
*end = seg->end;
@ -422,7 +423,7 @@ segment_load (char *name, int do_nonshared, unsigned long *addr,
rc = __segment_load (name, do_nonshared, addr, end);
else {
if (do_nonshared == seg->do_nonshared) {
atomic_inc(&seg->ref_count);
refcount_inc(&seg->ref_count);
*addr = seg->start_addr;
*end = seg->end;
rc = seg->vm_segtype;
@ -468,7 +469,7 @@ segment_modify_shared (char *name, int do_nonshared)
rc = 0;
goto out_unlock;
}
if (atomic_read (&seg->ref_count) != 1) {
if (refcount_read(&seg->ref_count) != 1) {
pr_warn("DCSS %s is in use and cannot be reloaded\n", name);
rc = -EAGAIN;
goto out_unlock;
@ -544,7 +545,7 @@ segment_unload(char *name)
pr_err("Unloading unknown DCSS %s failed\n", name);
goto out_unlock;
}
if (atomic_dec_return(&seg->ref_count) != 0)
if (!refcount_dec_and_test(&seg->ref_count))
goto out_unlock;
release_resource(seg->res);
kfree(seg->res);


@ -67,7 +67,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
INIT_RADIX_TREE(&gmap->host_to_rmap, GFP_ATOMIC);
spin_lock_init(&gmap->guest_table_lock);
spin_lock_init(&gmap->shadow_lock);
atomic_set(&gmap->ref_count, 1);
refcount_set(&gmap->ref_count, 1);
page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
if (!page)
goto out_free;
@ -214,7 +214,7 @@ static void gmap_free(struct gmap *gmap)
*/
struct gmap *gmap_get(struct gmap *gmap)
{
atomic_inc(&gmap->ref_count);
refcount_inc(&gmap->ref_count);
return gmap;
}
EXPORT_SYMBOL_GPL(gmap_get);
@ -227,7 +227,7 @@ EXPORT_SYMBOL_GPL(gmap_get);
*/
void gmap_put(struct gmap *gmap)
{
if (atomic_dec_return(&gmap->ref_count) == 0)
if (refcount_dec_and_test(&gmap->ref_count))
gmap_free(gmap);
}
EXPORT_SYMBOL_GPL(gmap_put);
@ -1594,7 +1594,7 @@ static struct gmap *gmap_find_shadow(struct gmap *parent, unsigned long asce,
continue;
if (!sg->initialized)
return ERR_PTR(-EAGAIN);
atomic_inc(&sg->ref_count);
refcount_inc(&sg->ref_count);
return sg;
}
return NULL;
@ -1682,7 +1682,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
}
}
}
atomic_set(&new->ref_count, 2);
refcount_set(&new->ref_count, 2);
list_add(&new->list, &parent->children);
if (asce & _ASCE_REAL_SPACE) {
/* nothing to protect, return right away */


@ -236,18 +236,6 @@ static void __init kasan_early_detect_facilities(void)
}
}
static unsigned long __init get_mem_detect_end(void)
{
unsigned long start;
unsigned long end;
if (mem_detect.count) {
__get_mem_detect_block(mem_detect.count - 1, &start, &end);
return end;
}
return 0;
}
void __init kasan_early_init(void)
{
unsigned long untracked_mem_end;
@ -273,6 +261,8 @@ void __init kasan_early_init(void)
/* respect mem= cmdline parameter */
if (memory_end_set && memsize > memory_end)
memsize = memory_end;
if (IS_ENABLED(CONFIG_CRASH_DUMP) && OLDMEM_BASE)
memsize = min(memsize, OLDMEM_SIZE);
memsize = min(memsize, KASAN_SHADOW_START);
if (IS_ENABLED(CONFIG_KASAN_S390_4_LEVEL_PAGING)) {


@ -21,17 +21,11 @@ static int cmma_flag = 1;
static int __init cmma(char *str)
{
char *parm;
bool enabled;
parm = strstrip(str);
if (strcmp(parm, "yes") == 0 || strcmp(parm, "on") == 0) {
cmma_flag = 1;
return 1;
}
cmma_flag = 0;
if (strcmp(parm, "no") == 0 || strcmp(parm, "off") == 0)
return 1;
return 0;
if (!kstrtobool(str, &enabled))
cmma_flag = enabled;
return 1;
}
__setup("cmma=", cmma);


@ -558,9 +558,7 @@ static int __init early_parse_emu_nodes(char *p)
{
int count;
if (kstrtoint(p, 0, &count) != 0 || count <= 0)
return 0;
if (count <= 0)
if (!p || kstrtoint(p, 0, &count) != 0 || count <= 0)
return 0;
emu_nodes = min(count, MAX_NUMNODES);
return 0;
@ -572,7 +570,8 @@ early_param("emu_nodes", early_parse_emu_nodes);
*/
static int __init early_parse_emu_size(char *p)
{
emu_size = memparse(p, NULL);
if (p)
emu_size = memparse(p, NULL);
return 0;
}
early_param("emu_size", early_parse_emu_size);


@ -158,6 +158,8 @@ early_param("numa_debug", parse_debug);
static int __init parse_numa(char *parm)
{
if (!parm)
return 1;
if (strcmp(parm, numa_mode_plain.name) == 0)
mode = &numa_mode_plain;
#ifdef CONFIG_NUMA_EMU


@ -431,13 +431,13 @@ static void zpci_map_resources(struct pci_dev *pdev)
}
#ifdef CONFIG_PCI_IOV
i = PCI_IOV_RESOURCES;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
int bar = i + PCI_IOV_RESOURCES;
for (; i < PCI_SRIOV_NUM_BARS + PCI_IOV_RESOURCES; i++) {
len = pci_resource_len(pdev, i);
len = pci_resource_len(pdev, bar);
if (!len)
continue;
pdev->resource[i].parent = &iov_res;
pdev->resource[bar].parent = &iov_res;
}
#endif
}


@ -674,9 +674,9 @@ EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
static int __init s390_iommu_setup(char *str)
{
if (!strncmp(str, "strict", 6))
if (!strcmp(str, "strict"))
s390_iommu_strict = 1;
return 0;
return 1;
}
__setup("s390_iommu=", s390_iommu_setup);


@ -284,7 +284,7 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
return rc;
irq_set_chip_and_handler(irq, &zpci_irq_chip,
handle_percpu_irq);
msg.data = hwirq;
msg.data = hwirq - bit;
if (irq_delivery == DIRECTED) {
msg.address_lo = zdev->msi_addr & 0xff0000ff;
msg.address_lo |= msi->affinity ?


@ -57,6 +57,9 @@ static struct facility_def facility_defs[] = {
#endif
#ifdef CONFIG_HAVE_MARCH_Z14_FEATURES
58, /* miscellaneous-instruction-extension 2 */
#endif
#ifdef CONFIG_HAVE_MARCH_Z15_FEATURES
61, /* miscellaneous-instruction-extension 3 */
#endif
-1 /* END */
}


@ -145,6 +145,26 @@ config CRYPTO_SHA512_S390
It is available as of z10.
config CRYPTO_SHA3_256_S390
tristate "SHA3_224 and SHA3_256 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA3_256 secure hash standard.
It is available as of z14.
config CRYPTO_SHA3_512_S390
tristate "SHA3_384 and SHA3_512 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA3_512 secure hash standard.
It is available as of z14.
config CRYPTO_DES_S390
tristate "DES and Triple DES cipher algorithms"
depends on S390


@ -4,6 +4,3 @@
#
obj-y += cio/ block/ char/ crypto/ net/ scsi/ virtio/
drivers-y += drivers/s390/built-in.a


@ -49,6 +49,3 @@ obj-$(CONFIG_CRASH_DUMP) += sclp_sdias.o zcore.o
hmcdrv-objs := hmcdrv_mod.o hmcdrv_dev.o hmcdrv_ftp.o hmcdrv_cache.o diag_ftp.o sclp_ftp.o
obj-$(CONFIG_HMC_DRV) += hmcdrv.o
chkbss := sclp_early_core.o
include $(srctree)/arch/s390/scripts/Makefile.chkbss


@ -40,7 +40,7 @@ static void __init sclp_early_facilities_detect(struct read_info_sccb *sccb)
sclp.has_gisaf = !!(sccb->fac118 & 0x08);
sclp.has_hvs = !!(sccb->fac119 & 0x80);
sclp.has_kss = !!(sccb->fac98 & 0x01);
sclp.has_sipl = !!(sccb->cbl & 0x02);
sclp.has_sipl = !!(sccb->cbl & 0x4000);
if (sccb->fac85 & 0x02)
S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP;
if (sccb->fac91 & 0x40)


@ -43,6 +43,8 @@ static struct cma *vmcp_cma;
static int __init early_parse_vmcp_cma(char *p)
{
if (!p)
return 1;
vmcp_cma_size = ALIGN(memparse(p, NULL), PAGE_SIZE);
return 0;
}


@ -27,6 +27,9 @@ struct workqueue_struct *vfio_ccw_work_q;
static struct kmem_cache *vfio_ccw_io_region;
static struct kmem_cache *vfio_ccw_cmd_region;
debug_info_t *vfio_ccw_debug_msg_id;
debug_info_t *vfio_ccw_debug_trace_id;
/*
* Helpers
*/
@ -164,6 +167,9 @@ static int vfio_ccw_sch_probe(struct subchannel *sch)
if (ret)
goto out_disable;
VFIO_CCW_MSG_EVENT(4, "bound to subchannel %x.%x.%04x\n",
sch->schid.cssid, sch->schid.ssid,
sch->schid.sch_no);
return 0;
out_disable:
@ -194,6 +200,9 @@ static int vfio_ccw_sch_remove(struct subchannel *sch)
kfree(private->cp.guest_cp);
kfree(private);
VFIO_CCW_MSG_EVENT(4, "unbound from subchannel %x.%x.%04x\n",
sch->schid.cssid, sch->schid.ssid,
sch->schid.sch_no);
return 0;
}
@ -263,27 +272,64 @@ static struct css_driver vfio_ccw_sch_driver = {
.sch_event = vfio_ccw_sch_event,
};
static int __init vfio_ccw_debug_init(void)
{
vfio_ccw_debug_msg_id = debug_register("vfio_ccw_msg", 16, 1,
11 * sizeof(long));
if (!vfio_ccw_debug_msg_id)
goto out_unregister;
debug_register_view(vfio_ccw_debug_msg_id, &debug_sprintf_view);
debug_set_level(vfio_ccw_debug_msg_id, 2);
vfio_ccw_debug_trace_id = debug_register("vfio_ccw_trace", 16, 1, 16);
if (!vfio_ccw_debug_trace_id)
goto out_unregister;
debug_register_view(vfio_ccw_debug_trace_id, &debug_hex_ascii_view);
debug_set_level(vfio_ccw_debug_trace_id, 2);
return 0;
out_unregister:
debug_unregister(vfio_ccw_debug_msg_id);
debug_unregister(vfio_ccw_debug_trace_id);
return -1;
}
static void vfio_ccw_debug_exit(void)
{
debug_unregister(vfio_ccw_debug_msg_id);
debug_unregister(vfio_ccw_debug_trace_id);
}
static int __init vfio_ccw_sch_init(void)
{
int ret = -ENOMEM;
int ret;
ret = vfio_ccw_debug_init();
if (ret)
return ret;
vfio_ccw_work_q = create_singlethread_workqueue("vfio-ccw");
if (!vfio_ccw_work_q)
return -ENOMEM;
if (!vfio_ccw_work_q) {
ret = -ENOMEM;
goto out_err;
}
vfio_ccw_io_region = kmem_cache_create_usercopy("vfio_ccw_io_region",
sizeof(struct ccw_io_region), 0,
SLAB_ACCOUNT, 0,
sizeof(struct ccw_io_region), NULL);
if (!vfio_ccw_io_region)
if (!vfio_ccw_io_region) {
ret = -ENOMEM;
goto out_err;
}
vfio_ccw_cmd_region = kmem_cache_create_usercopy("vfio_ccw_cmd_region",
sizeof(struct ccw_cmd_region), 0,
SLAB_ACCOUNT, 0,
sizeof(struct ccw_cmd_region), NULL);
if (!vfio_ccw_cmd_region)
if (!vfio_ccw_cmd_region) {
ret = -ENOMEM;
goto out_err;
}
isc_register(VFIO_CCW_ISC);
ret = css_driver_register(&vfio_ccw_sch_driver);
@ -298,6 +344,7 @@ out_err:
kmem_cache_destroy(vfio_ccw_cmd_region);
kmem_cache_destroy(vfio_ccw_io_region);
destroy_workqueue(vfio_ccw_work_q);
vfio_ccw_debug_exit();
return ret;
}
@ -308,6 +355,7 @@ static void __exit vfio_ccw_sch_exit(void)
kmem_cache_destroy(vfio_ccw_io_region);
kmem_cache_destroy(vfio_ccw_cmd_region);
destroy_workqueue(vfio_ccw_work_q);
vfio_ccw_debug_exit();
}
module_init(vfio_ccw_sch_init);
module_exit(vfio_ccw_sch_exit);


@ -37,9 +37,14 @@ static int fsm_io_helper(struct vfio_ccw_private *private)
goto out;
}
VFIO_CCW_TRACE_EVENT(5, "stIO");
VFIO_CCW_TRACE_EVENT(5, dev_name(&sch->dev));
/* Issue "Start Subchannel" */
ccode = ssch(sch->schid, orb);
VFIO_CCW_HEX_EVENT(5, &ccode, sizeof(ccode));
switch (ccode) {
case 0:
/*
@ -86,9 +91,14 @@ static int fsm_do_halt(struct vfio_ccw_private *private)
spin_lock_irqsave(sch->lock, flags);
VFIO_CCW_TRACE_EVENT(2, "haltIO");
VFIO_CCW_TRACE_EVENT(2, dev_name(&sch->dev));
/* Issue "Halt Subchannel" */
ccode = hsch(sch->schid);
VFIO_CCW_HEX_EVENT(2, &ccode, sizeof(ccode));
switch (ccode) {
case 0:
/*
@ -122,9 +132,14 @@ static int fsm_do_clear(struct vfio_ccw_private *private)
spin_lock_irqsave(sch->lock, flags);
VFIO_CCW_TRACE_EVENT(2, "clearIO");
VFIO_CCW_TRACE_EVENT(2, dev_name(&sch->dev));
/* Issue "Clear Subchannel" */
ccode = csch(sch->schid);
VFIO_CCW_HEX_EVENT(2, &ccode, sizeof(ccode));
switch (ccode) {
case 0:
/*
@ -149,6 +164,9 @@ static void fsm_notoper(struct vfio_ccw_private *private,
{
struct subchannel *sch = private->sch;
VFIO_CCW_TRACE_EVENT(2, "notoper");
VFIO_CCW_TRACE_EVENT(2, dev_name(&sch->dev));
/*
* TODO:
* Probably we should send the machine check to the guest.
@ -229,6 +247,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
struct ccw_io_region *io_region = private->io_region;
struct mdev_device *mdev = private->mdev;
char *errstr = "request";
struct subchannel_id schid = get_schid(private);
private->state = VFIO_CCW_STATE_CP_PROCESSING;
memcpy(scsw, io_region->scsw_area, sizeof(*scsw));
@ -239,18 +258,32 @@ static void fsm_io_request(struct vfio_ccw_private *private,
/* Don't try to build a cp if transport mode is specified. */
if (orb->tm.b) {
io_region->ret_code = -EOPNOTSUPP;
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): transport mode\n",
mdev_uuid(mdev), schid.cssid,
schid.ssid, schid.sch_no);
errstr = "transport mode";
goto err_out;
}
io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
orb);
if (io_region->ret_code) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): cp_init=%d\n",
mdev_uuid(mdev), schid.cssid,
schid.ssid, schid.sch_no,
io_region->ret_code);
errstr = "cp init";
goto err_out;
}
io_region->ret_code = cp_prefetch(&private->cp);
if (io_region->ret_code) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): cp_prefetch=%d\n",
mdev_uuid(mdev), schid.cssid,
schid.ssid, schid.sch_no,
io_region->ret_code);
errstr = "cp prefetch";
cp_free(&private->cp);
goto err_out;
@ -259,23 +292,36 @@ static void fsm_io_request(struct vfio_ccw_private *private,
/* Start channel program and wait for I/O interrupt. */
io_region->ret_code = fsm_io_helper(private);
if (io_region->ret_code) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): fsm_io_helper=%d\n",
mdev_uuid(mdev), schid.cssid,
schid.ssid, schid.sch_no,
io_region->ret_code);
errstr = "cp fsm_io_helper";
cp_free(&private->cp);
goto err_out;
}
return;
} else if (scsw->cmd.fctl & SCSW_FCTL_HALT_FUNC) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): halt on io_region\n",
mdev_uuid(mdev), schid.cssid,
schid.ssid, schid.sch_no);
/* halt is handled via the async cmd region */
io_region->ret_code = -EOPNOTSUPP;
goto err_out;
} else if (scsw->cmd.fctl & SCSW_FCTL_CLEAR_FUNC) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): clear on io_region\n",
mdev_uuid(mdev), schid.cssid,
schid.ssid, schid.sch_no);
/* clear is handled via the async cmd region */
io_region->ret_code = -EOPNOTSUPP;
goto err_out;
}
err_out:
trace_vfio_ccw_io_fctl(scsw->cmd.fctl, get_schid(private),
trace_vfio_ccw_io_fctl(scsw->cmd.fctl, schid,
io_region->ret_code, errstr);
}
@ -308,6 +354,9 @@ static void fsm_irq(struct vfio_ccw_private *private,
{
struct irb *irb = this_cpu_ptr(&cio_irb);
VFIO_CCW_TRACE_EVENT(6, "IRQ");
VFIO_CCW_TRACE_EVENT(6, dev_name(&private->sch->dev));
memcpy(&private->irb, irb, sizeof(*irb));
queue_work(vfio_ccw_work_q, &private->io_work);


@ -124,6 +124,11 @@ static int vfio_ccw_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
private->mdev = mdev;
private->state = VFIO_CCW_STATE_IDLE;
VFIO_CCW_MSG_EVENT(2, "mdev %pUl, sch %x.%x.%04x: create\n",
mdev_uuid(mdev), private->sch->schid.cssid,
private->sch->schid.ssid,
private->sch->schid.sch_no);
return 0;
}
@ -132,6 +137,11 @@ static int vfio_ccw_mdev_remove(struct mdev_device *mdev)
struct vfio_ccw_private *private =
dev_get_drvdata(mdev_parent_dev(mdev));
VFIO_CCW_MSG_EVENT(2, "mdev %pUl, sch %x.%x.%04x: remove\n",
mdev_uuid(mdev), private->sch->schid.cssid,
private->sch->schid.ssid,
private->sch->schid.sch_no);
if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
(private->state != VFIO_CCW_STATE_STANDBY)) {
if (!vfio_ccw_sch_quiesce(private->sch))


@ -17,6 +17,7 @@
#include <linux/eventfd.h>
#include <linux/workqueue.h>
#include <linux/vfio_ccw.h>
#include <asm/debug.h>
#include "css.h"
#include "vfio_ccw_cp.h"
@ -139,4 +140,20 @@ static inline void vfio_ccw_fsm_event(struct vfio_ccw_private *private,
extern struct workqueue_struct *vfio_ccw_work_q;
/* s390 debug feature, similar to base cio */
extern debug_info_t *vfio_ccw_debug_msg_id;
extern debug_info_t *vfio_ccw_debug_trace_id;
#define VFIO_CCW_TRACE_EVENT(imp, txt) \
debug_text_event(vfio_ccw_debug_trace_id, imp, txt)
#define VFIO_CCW_MSG_EVENT(imp, args...) \
debug_sprintf_event(vfio_ccw_debug_msg_id, imp, ##args)
static inline void VFIO_CCW_HEX_EVENT(int level, void *data, int length)
{
debug_event(vfio_ccw_debug_trace_id, level, data, length);
}
#endif


@ -7,7 +7,7 @@ ap-objs := ap_bus.o ap_card.o ap_queue.o
obj-$(subst m,y,$(CONFIG_ZCRYPT)) += ap.o
# zcrypt_api.o and zcrypt_msgtype*.o depend on ap.o
zcrypt-objs := zcrypt_api.o zcrypt_card.o zcrypt_queue.o
zcrypt-objs += zcrypt_msgtype6.o zcrypt_msgtype50.o
zcrypt-objs += zcrypt_msgtype6.o zcrypt_msgtype50.o zcrypt_ccamisc.o
obj-$(CONFIG_ZCRYPT) += zcrypt.o
# adapter drivers depend on ap.o and zcrypt.o
obj-$(CONFIG_ZCRYPT) += zcrypt_cex2c.o zcrypt_cex2a.o zcrypt_cex4.o

File diff suppressed because it is too large


@ -1143,7 +1143,7 @@ int vfio_ap_mdev_reset_queue(unsigned int apid, unsigned int apqi,
msleep(20);
status = ap_tapq(apqn, NULL);
}
WARN_ON_ONCE(retry <= 0);
WARN_ON_ONCE(retry2 <= 0);
return 0;
case AP_RESPONSE_RESET_IN_PROGRESS:
case AP_RESPONSE_BUSY:


@ -35,6 +35,7 @@
#include "zcrypt_msgtype6.h"
#include "zcrypt_msgtype50.h"
#include "zcrypt_ccamisc.h"
/*
* Module description.
@ -1160,6 +1161,34 @@ void zcrypt_device_status_mask_ext(struct zcrypt_device_status_ext *devstatus)
}
EXPORT_SYMBOL(zcrypt_device_status_mask_ext);
int zcrypt_device_status_ext(int card, int queue,
struct zcrypt_device_status_ext *devstat)
{
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
memset(devstat, 0, sizeof(*devstat));
spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
if (card == AP_QID_CARD(zq->queue->qid) &&
queue == AP_QID_QUEUE(zq->queue->qid)) {
devstat->hwtype = zc->card->ap_dev.device_type;
devstat->functions = zc->card->functions >> 26;
devstat->qid = zq->queue->qid;
devstat->online = zq->online ? 0x01 : 0x00;
spin_unlock(&zcrypt_list_lock);
return 0;
}
}
}
spin_unlock(&zcrypt_list_lock);
return -ENODEV;
}
EXPORT_SYMBOL(zcrypt_device_status_ext);
static void zcrypt_status_mask(char status[], size_t max_adapters)
{
struct zcrypt_card *zc;
@ -1874,6 +1903,7 @@ void __exit zcrypt_api_exit(void)
misc_deregister(&zcrypt_misc_device);
zcrypt_msgtype6_exit();
zcrypt_msgtype50_exit();
zcrypt_ccamisc_exit();
zcrypt_debug_exit();
}


@ -121,9 +121,6 @@ void zcrypt_card_get(struct zcrypt_card *);
int zcrypt_card_put(struct zcrypt_card *);
int zcrypt_card_register(struct zcrypt_card *);
void zcrypt_card_unregister(struct zcrypt_card *);
struct zcrypt_card *zcrypt_card_get_best(unsigned int *,
unsigned int, unsigned int);
void zcrypt_card_put_best(struct zcrypt_card *, unsigned int);
struct zcrypt_queue *zcrypt_queue_alloc(size_t);
void zcrypt_queue_free(struct zcrypt_queue *);
@ -132,8 +129,6 @@ int zcrypt_queue_put(struct zcrypt_queue *);
int zcrypt_queue_register(struct zcrypt_queue *);
void zcrypt_queue_unregister(struct zcrypt_queue *);
void zcrypt_queue_force_online(struct zcrypt_queue *, int);
struct zcrypt_queue *zcrypt_queue_get_best(unsigned int, unsigned int);
void zcrypt_queue_put_best(struct zcrypt_queue *, unsigned int);
int zcrypt_rng_device_add(void);
void zcrypt_rng_device_remove(void);
@ -145,5 +140,7 @@ int zcrypt_api_init(void);
void zcrypt_api_exit(void);
long zcrypt_send_cprb(struct ica_xcRB *xcRB);
void zcrypt_device_status_mask_ext(struct zcrypt_device_status_ext *devstatus);
int zcrypt_device_status_ext(int card, int queue,
struct zcrypt_device_status_ext *devstatus);
#endif /* _ZCRYPT_API_H_ */

File diff suppressed because it is too large


@ -0,0 +1,217 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Copyright IBM Corp. 2019
* Author(s): Harald Freudenberger <freude@linux.ibm.com>
* Ingo Franzki <ifranzki@linux.ibm.com>
*
* Collection of CCA misc functions used by zcrypt and pkey
*/
#ifndef _ZCRYPT_CCAMISC_H_
#define _ZCRYPT_CCAMISC_H_
#include <asm/zcrypt.h>
#include <asm/pkey.h>
/* Key token types */
#define TOKTYPE_NON_CCA 0x00 /* Non-CCA key token */
#define TOKTYPE_CCA_INTERNAL 0x01 /* CCA internal key token */
/* For TOKTYPE_NON_CCA: */
#define TOKVER_PROTECTED_KEY 0x01 /* Protected key token */
/* For TOKTYPE_CCA_INTERNAL: */
#define TOKVER_CCA_AES 0x04 /* CCA AES key token */
#define TOKVER_CCA_VLSC 0x05 /* var length sym cipher key token */
/* Max size of a cca variable length cipher key token */
#define MAXCCAVLSCTOKENSIZE 725
/* header part of a CCA key token */
struct keytoken_header {
u8 type; /* one of the TOKTYPE values */
u8 res0[1];
u16 len; /* vlsc token: total length in bytes */
u8 version; /* one of the TOKVER values */
u8 res1[3];
} __packed;
/* inside view of a CCA secure key token (only type 0x01 version 0x04) */
struct secaeskeytoken {
u8 type; /* 0x01 for internal key token */
u8 res0[3];
u8 version; /* should be 0x04 */
u8 res1[1];
u8 flag; /* key flags */
u8 res2[1];
u64 mkvp; /* master key verification pattern */
u8 key[32]; /* key value (encrypted) */
u8 cv[8]; /* control vector */
u16 bitsize; /* key bit size */
u16 keysize; /* key byte size */
u8 tvv[4]; /* token validation value */
} __packed;
/* inside view of a variable length symmetric cipher AES key token */
struct cipherkeytoken {
u8 type; /* 0x01 for internal key token */
u8 res0[1];
u16 len; /* total key token length in bytes */
u8 version; /* should be 0x05 */
u8 res1[3];
u8 kms; /* key material state, 0x03 means wrapped with MK */
u8 kvpt; /* key verification pattern type, should be 0x01 */
u64 mkvp0; /* master key verification pattern, lo part */
u64 mkvp1; /* master key verification pattern, hi part (unused) */
u8 eskwm; /* encrypted section key wrapping method */
u8 hashalg; /* hash algorithm used for wrapping key */
u8 plfver; /* payload format version */
u8 res2[1];
u8 adsver; /* associated data section version */
u8 res3[1];
u16 adslen; /* associated data section length */
u8 kllen; /* optional key label length */
u8 ieaslen; /* optional extended associated data length */
u8 uadlen; /* optional user definable associated data length */
u8 res4[1];
u16 wpllen; /* wrapped payload length in bits: */
/* plfver 0x00 0x01 */
/* AES-128 512 640 */
/* AES-192 576 640 */
/* AES-256 640 640 */
u8 res5[1];
u8 algtype; /* 0x02 for AES cipher */
u16 keytype; /* 0x0001 for 'cipher' */
u8 kufc; /* key usage field count */
u16 kuf1; /* key usage field 1 */
u16 kuf2; /* key usage field 2 */
u8 kmfc; /* key management field count */
u16 kmf1; /* key management field 1 */
u16 kmf2; /* key management field 2 */
u16 kmf3; /* key management field 3 */
u8 vdata[0]; /* variable part data follows */
} __packed;
/* Some defines for the CCA AES cipherkeytoken kmf1 field */
#define KMF1_XPRT_SYM 0x8000
#define KMF1_XPRT_UASY 0x4000
#define KMF1_XPRT_AASY 0x2000
#define KMF1_XPRT_RAW 0x1000
#define KMF1_XPRT_CPAC 0x0800
#define KMF1_XPRT_DES 0x0080
#define KMF1_XPRT_AES 0x0040
#define KMF1_XPRT_RSA 0x0008
/*
 * Simple check if the token is a valid CCA secure AES data key
 * token. If keybitsize is given, the bitsize of the key is
 * also checked. Returns 0 on success or errno value on failure.
 */
int cca_check_secaeskeytoken(debug_info_t *dbg, int dbflvl,
			     const u8 *token, int keybitsize);
/*
 * Simple check if the token is a valid CCA secure AES cipher key
 * token. If keybitsize is given, the bitsize of the key is
 * also checked. If checkcpacfexport is enabled, the key is also
 * checked for the export flag to allow CPACF export.
 * Returns 0 on success or errno value on failure.
 */
int cca_check_secaescipherkey(debug_info_t *dbg, int dbflvl,
			      const u8 *token, int keybitsize,
			      int checkcpacfexport);
/*
 * Generate (random) CCA AES DATA secure key.
 */
int cca_genseckey(u16 cardnr, u16 domain, u32 keybitsize, u8 *seckey);
/*
 * Generate CCA AES DATA secure key with given clear key value.
 */
int cca_clr2seckey(u16 cardnr, u16 domain, u32 keybitsize,
		   const u8 *clrkey, u8 *seckey);
/*
 * Derive a protected key from a CCA AES DATA secure key.
 */
int cca_sec2protkey(u16 cardnr, u16 domain,
		    const u8 seckey[SECKEYBLOBSIZE],
		    u8 *protkey, u32 *protkeylen, u32 *protkeytype);
/*
 * Generate (random) CCA AES CIPHER secure key.
 */
int cca_gencipherkey(u16 cardnr, u16 domain, u32 keybitsize, u32 keygenflags,
		     u8 *keybuf, size_t *keybufsize);
/*
 * Derive a protected key from a CCA AES CIPHER secure key.
 */
int cca_cipher2protkey(u16 cardnr, u16 domain, const u8 *ckey,
		       u8 *protkey, u32 *protkeylen, u32 *protkeytype);
/*
 * Build CCA AES CIPHER secure key with a given clear key value.
 */
int cca_clr2cipherkey(u16 cardnr, u16 domain, u32 keybitsize, u32 keygenflags,
		      const u8 *clrkey, u8 *keybuf, size_t *keybufsize);
/*
 * Query cryptographic facility from CCA adapter
 */
int cca_query_crypto_facility(u16 cardnr, u16 domain,
			      const char *keyword,
			      u8 *rarray, size_t *rarraylen,
			      u8 *varray, size_t *varraylen);
/*
 * Search for a matching crypto card based on the Master Key
 * Verification Pattern provided inside a secure key.
 * Works with CCA AES data and cipher keys.
 * Returns < 0 on failure, 0 if CURRENT MKVP matches and
 * 1 if OLD MKVP matches.
 */
int cca_findcard(const u8 *key, u16 *pcardnr, u16 *pdomain, int verify);
/*
 * Build a list of cca apqns meeting the following constraints:
 * - apqn is online and is in fact a CCA apqn
 * - if cardnr is not FFFF only apqns with this cardnr
 * - if domain is not FFFF only apqns with this domainnr
 * - if minhwtype > 0 only apqns with hwtype >= minhwtype
 * - if cur_mkvp != 0 only apqns where cur_mkvp == mkvp
 * - if old_mkvp != 0 only apqns where old_mkvp == mkvp
 * - if verify is enabled and a cur_mkvp and/or old_mkvp
 *   value is given, then refetch the cca_info and make sure the current
 *   cur_mkvp or old_mkvp values of the apqn are used.
 * The array of apqn entries is allocated with kmalloc and returned in *apqns;
 * the number of apqns stored into the list is returned in *nr_apqns. One apqn
 * entry is simply a 32-bit value with 16-bit cardnr and 16-bit domain nr and
 * may be cast to struct pkey_apqn. The return value is either 0 for success
 * or a negative errno value. If no apqn meeting the criteria is found,
 * -ENODEV is returned.
 */
int cca_findcard2(u32 **apqns, u32 *nr_apqns, u16 cardnr, u16 domain,
		  int minhwtype, u64 cur_mkvp, u64 old_mkvp, int verify);
/* struct to hold info for each CCA queue */
struct cca_info {
	int  hwtype;		/* one of the defined AP_DEVICE_TYPE_* */
	char new_mk_state;	/* '1' empty, '2' partially full, '3' full */
	char cur_mk_state;	/* '1' invalid, '2' valid */
	char old_mk_state;	/* '1' invalid, '2' valid */
	u64  new_mkvp;		/* truncated sha256 hash of new master key */
	u64  cur_mkvp;		/* truncated sha256 hash of current master key */
	u64  old_mkvp;		/* truncated sha256 hash of old master key */
	char serial[9];		/* serial number string (8 ascii numbers + 0x00) */
};
/*
 * Fetch CCA information about a CCA queue.
 */
int cca_get_info(u16 card, u16 dom, struct cca_info *ci, int verify);
void zcrypt_ccamisc_exit(void);
#endif /* _ZCRYPT_CCAMISC_H_ */

@@ -18,6 +18,7 @@
#include "zcrypt_msgtype50.h"
#include "zcrypt_error.h"
#include "zcrypt_cex4.h"
#include "zcrypt_ccamisc.h"
#define CEX4A_MIN_MOD_SIZE 1 /* 8 bits */
#define CEX4A_MAX_MOD_SIZE_2K 256 /* 2048 bits */
@@ -65,6 +66,85 @@ static struct ap_device_id zcrypt_cex4_queue_ids[] = {
MODULE_DEVICE_TABLE(ap, zcrypt_cex4_queue_ids);
/*
 * CCA card additional device attributes
 */
static ssize_t serialnr_show(struct device *dev,
			     struct device_attribute *attr,
			     char *buf)
{
	struct cca_info ci;
	struct ap_card *ac = to_ap_card(dev);
	struct zcrypt_card *zc = ac->private;

	memset(&ci, 0, sizeof(ci));
	if (ap_domain_index >= 0)
		cca_get_info(ac->id, ap_domain_index, &ci, zc->online);

	return snprintf(buf, PAGE_SIZE, "%s\n", ci.serial);
}
static DEVICE_ATTR_RO(serialnr);
static struct attribute *cca_card_attrs[] = {
	&dev_attr_serialnr.attr,
	NULL,
};

static const struct attribute_group cca_card_attr_group = {
	.attrs = cca_card_attrs,
};
/*
 * CCA queue additional device attributes
 */
static ssize_t mkvps_show(struct device *dev,
			  struct device_attribute *attr,
			  char *buf)
{
	int n = 0;
	struct cca_info ci;
	struct zcrypt_queue *zq = to_ap_queue(dev)->private;
	static const char * const cao_state[] = { "invalid", "valid" };
	static const char * const new_state[] = { "empty", "partial", "full" };

	memset(&ci, 0, sizeof(ci));

	cca_get_info(AP_QID_CARD(zq->queue->qid),
		     AP_QID_QUEUE(zq->queue->qid),
		     &ci, zq->online);

	if (ci.new_mk_state >= '1' && ci.new_mk_state <= '3')
		n = snprintf(buf, PAGE_SIZE, "AES NEW: %s 0x%016llx\n",
			     new_state[ci.new_mk_state - '1'], ci.new_mkvp);
	else
		n = snprintf(buf, PAGE_SIZE, "AES NEW: - -\n");

	if (ci.cur_mk_state >= '1' && ci.cur_mk_state <= '2')
		n += snprintf(buf + n, PAGE_SIZE - n, "AES CUR: %s 0x%016llx\n",
			      cao_state[ci.cur_mk_state - '1'], ci.cur_mkvp);
	else
		n += snprintf(buf + n, PAGE_SIZE - n, "AES CUR: - -\n");

	if (ci.old_mk_state >= '1' && ci.old_mk_state <= '2')
		n += snprintf(buf + n, PAGE_SIZE - n, "AES OLD: %s 0x%016llx\n",
			      cao_state[ci.old_mk_state - '1'], ci.old_mkvp);
	else
		n += snprintf(buf + n, PAGE_SIZE - n, "AES OLD: - -\n");

	return n;
}
static DEVICE_ATTR_RO(mkvps);
static struct attribute *cca_queue_attrs[] = {
	&dev_attr_mkvps.attr,
	NULL,
};

static const struct attribute_group cca_queue_attr_group = {
	.attrs = cca_queue_attrs,
};
};
/**
 * Probe function for CEX4/CEX5/CEX6 card device. It always
 * accepts the AP device since the bus_match already checked
@@ -194,8 +274,17 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
	if (rc) {
		ac->private = NULL;
		zcrypt_card_free(zc);
		goto out;
	}

	if (ap_test_bit(&ac->functions, AP_FUNC_COPRO)) {
		rc = sysfs_create_group(&ap_dev->device.kobj,
					&cca_card_attr_group);
		if (rc)
			zcrypt_card_unregister(zc);
	}

out:
	return rc;
}
@@ -205,8 +294,11 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
*/
static void zcrypt_cex4_card_remove(struct ap_device *ap_dev)
{
	struct zcrypt_card *zc = to_ap_card(&ap_dev->device)->private;
	struct ap_card *ac = to_ap_card(&ap_dev->device);
	struct zcrypt_card *zc = ac->private;

	if (ap_test_bit(&ac->functions, AP_FUNC_COPRO))
		sysfs_remove_group(&ap_dev->device.kobj, &cca_card_attr_group);
	if (zc)
		zcrypt_card_unregister(zc);
}
@@ -251,6 +343,7 @@ static int zcrypt_cex4_queue_probe(struct ap_device *ap_dev)
	} else {
		return -ENODEV;
	}

	zq->queue = aq;
	zq->online = 1;
	atomic_set(&zq->load, 0);
@@ -261,8 +354,17 @@ static int zcrypt_cex4_queue_probe(struct ap_device *ap_dev)
	if (rc) {
		aq->private = NULL;
		zcrypt_queue_free(zq);
		goto out;
	}

	if (ap_test_bit(&aq->card->functions, AP_FUNC_COPRO)) {
		rc = sysfs_create_group(&ap_dev->device.kobj,
					&cca_queue_attr_group);
		if (rc)
			zcrypt_queue_unregister(zq);
	}

out:
	return rc;
}
@@ -275,6 +377,8 @@ static void zcrypt_cex4_queue_remove(struct ap_device *ap_dev)
	struct ap_queue *aq = to_ap_queue(&ap_dev->device);
	struct zcrypt_queue *zq = aq->private;

	if (ap_test_bit(&aq->card->functions, AP_FUNC_COPRO))
		sysfs_remove_group(&ap_dev->device.kobj, &cca_queue_attr_group);
	if (zq)
		zcrypt_queue_unregister(zq);
}