SCSI misc on 20151113

Sorry for the delay in this patch which was mostly caused by getting the
 merger of the mpt2/mpt3sas driver, which was seen as an essential item of
 maintenance work to do before the drivers diverge too much.  Unfortunately,
 this caused a compile failure (detected by linux-next), which then had to be
 fixed up and incubated.  In addition to the mpt2/3sas rework, there are
 updates from pm80xx, lpfc, bnx2fc, hpsa, ipr, aacraid, megaraid_sas, storvsc
 and ufs plus an assortment of changes including some year 2038 issues, a fix
 for a remove before detach issue in some drivers and a couple of other minor
 issues.
 
 Signed-off-by: James Bottomley <JBottomley@Odin.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQEcBAABAgAGBQJWRnpiAAoJEDeqqVYsXL0MJd0IAIkNP1q/6ksMw/lam2UlbdxN
 zFsFOIhGM3xLnBFehbCx7JW3cmybA7WC5jhMjaa1OeJmYHLpwbTzvVwrLs2l/0y1
 /S+G0wwxROZfIKj/1EO3oKbSPCu9N3lStxOnmNXt5PSUEzAXrTqSNTtnMf2ZGh7j
 bQxTWEJe+66GckgGw4ozTXJHWXqM/Zs/FsYjn2h/WzFhFv8utr7zRSzHMVjcqpLG
 TK/Lt03hiTGqiignKrV9W6JzdGTWf2LGIsj/njgR0dxv59cNH8PrHIjZyknSBxgn
 lblinsCjxDHWnP4BSfTi9MQG1lEiZiWO3Y6TKkKJTgxZ9M0Eitspc+cLOiJ1mpg=
 =HvQf
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull final round of SCSI updates from James Bottomley:
 "Sorry for the delay in this patch which was mostly caused by getting
  the merger of the mpt2/mpt3sas driver, which was seen as an essential
  item of maintenance work to do before the drivers diverge too much.
  Unfortunately, this caused a compile failure (detected by linux-next),
  which then had to be fixed up and incubated.

  In addition to the mpt2/3sas rework, there are updates from pm80xx,
  lpfc, bnx2fc, hpsa, ipr, aacraid, megaraid_sas, storvsc and ufs plus
  an assortment of changes including some year 2038 issues, a fix for a
  remove before detach issue in some drivers and a couple of other minor
  issues"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (141 commits)
  mpt3sas: fix inline markers on non inline function declarations
  sd: Clear PS bit before Mode Select.
  ibmvscsi: set max_lun to 32
  ibmvscsi: display default value for max_id, max_lun and max_channel.
  mptfusion: don't allow negative bytes in kbuf_alloc_2_sgl()
  scsi: pmcraid: replace struct timeval with ktime_get_real_seconds()
  mvumi: 64bit value for seconds_since1970
  be2iscsi: Fix bogus WARN_ON length check
  scsi_scan: don't dump trace when scsi_prep_async_scan() is called twice
  mpt3sas: Bump mpt3sas driver version to 09.102.00.00
  mpt3sas: Single driver module which supports both SAS 2.0 & SAS 3.0 HBAs
  mpt2sas, mpt3sas: Update the driver versions
  mpt3sas: setpci reset kernel oops fix
  mpt3sas: Added OEM Gen2 PnP ID branding names
  mpt3sas: Refcount fw_events and fix unsafe list usage
  mpt3sas: Refcount sas_device objects and fix unsafe list usage
  mpt3sas: sysfs attribute to report Backup Rail Monitor Status
  mpt3sas: Ported WarpDrive product SSS6200 support
  mpt3sas: fix for driver fails EEH, recovery from injected pci bus error
  mpt3sas: Manage MSI-X vectors according to HBA device type
  ...
Linus Torvalds 2015-11-13 20:35:54 -08:00
commit d83763f4a6
121 changed files with 6608 additions and 32748 deletions

View File

@ -0,0 +1,12 @@
What: /sys/bus/scsi/drivers/st/debug_flag
Date: October 2015
Kernel Version: ?.?
Contact: shane.seymour@hpe.com
Description:
This file allows you to turn debug output from the st driver
off if you write a '0' to the file or on if you write a '1'.
Note that debug output requires that the module be compiled
with the #define DEBUG set to a non-zero value (this is the
default). If DEBUG is set to 0 then this file will not
appear in sysfs as its presence is conditional upon debug
output support being compiled into the module.
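
	As a usage sketch (not part of the patch), a user-space program could
	flip this attribute as shown below; the sysfs path is the one documented
	above, and the helper and program names are hypothetical:

	#include <stdio.h>

	/* Hypothetical helper: write '1' to enable or '0' to disable st debug
	 * output via the sysfs attribute documented above.  Returns 0 on
	 * success, -1 if the attribute is absent (e.g. DEBUG compiled out). */
	int st_set_debug(int enable)
	{
		const char *path = "/sys/bus/scsi/drivers/st/debug_flag";
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fputc(enable ? '1' : '0', f);
		return fclose(f) ? -1 : 0;
	}

	int main(void)
	{
		return st_set_debug(1) ? 1 : 0;
	}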

View File

@ -0,0 +1,58 @@
* Qualcomm Technologies Inc Universal Flash Storage (UFS) PHY
UFSPHY nodes are defined to describe on-chip UFS PHY hardware macro.
Each UFS PHY instance should have its own node.
To bind UFS PHY with UFS host controller, the controller node should
contain a phandle reference to UFS PHY node.
Required properties:
- compatible : compatible list, contains "qcom,ufs-phy-qmp-20nm"
or "qcom,ufs-phy-qmp-14nm" according to the relevant phy in use.
- reg : should contain PHY register address space (mandatory),
- reg-names : indicates various resources passed to the driver (via the reg property) by name.
Required "reg-names" is "phy_mem".
- #phy-cells : This property shall be set to 0
- vdda-phy-supply : phandle to main PHY supply for analog domain
- vdda-pll-supply : phandle to PHY PLL and Power-Gen block power supply
- clocks : List of phandle and clock specifier pairs
- clock-names : List of clock input name strings sorted in the same
order as the clocks property. "ref_clk_src", "ref_clk",
"tx_iface_clk" & "rx_iface_clk" are mandatory but
"ref_clk_parent" is optional
Optional properties:
- vdda-phy-max-microamp : specifies max. load that can be drawn from phy supply
- vdda-pll-max-microamp : specifies max. load that can be drawn from pll supply
- vddp-ref-clk-supply : phandle to UFS device ref_clk pad power supply
- vddp-ref-clk-max-microamp : specifies max. load that can be drawn from this supply
- vddp-ref-clk-always-on : specifies if this supply needs to be kept always on
Example:
ufsphy1: ufsphy@0xfc597000 {
compatible = "qcom,ufs-phy-qmp-20nm";
reg = <0xfc597000 0x800>;
reg-names = "phy_mem";
#phy-cells = <0>;
vdda-phy-supply = <&pma8084_l4>;
vdda-pll-supply = <&pma8084_l12>;
vdda-phy-max-microamp = <50000>;
vdda-pll-max-microamp = <1000>;
clock-names = "ref_clk_src",
"ref_clk_parent",
"ref_clk",
"tx_iface_clk",
"rx_iface_clk";
clocks = <&clock_rpm clk_ln_bb_clk>,
<&clock_gcc clk_pcie_1_phy_ldo >,
<&clock_gcc clk_ufs_phy_ldo>,
<&clock_gcc clk_gcc_ufs_tx_cfg_clk>,
<&clock_gcc clk_gcc_ufs_rx_cfg_clk>;
};
ufshc@0xfc598000 {
...
phys = <&ufsphy1>;
phy-names = "ufsphy";
};

View File

@ -4,11 +4,18 @@ UFSHC nodes are defined to describe on-chip UFS host controllers.
Each UFS controller instance should have its own node.
Required properties:
- compatible : compatible list, contains "jedec,ufs-1.1"
- compatible : must contain "jedec,ufs-1.1", may also list one or more
of the following:
"qcom,msm8994-ufshc"
"qcom,msm8996-ufshc"
"qcom,ufshc"
- interrupts : <interrupt mapping for UFS host controller IRQ>
- reg : <registers mapping>
Optional properties:
- phys : phandle to UFS PHY node
- phy-names : should be "ufsphy"; used together with the "phys"
property, which provides the phandle to the UFS PHY node
- vdd-hba-supply : phandle to UFS host controller supply regulator node
- vcc-supply : phandle to VCC supply regulator node
- vccq-supply : phandle to VCCQ supply regulator node
@ -54,4 +61,6 @@ Example:
clocks = <&core 0>, <&ref 0>, <&iface 0>;
clock-names = "core_clk", "ref_clk", "iface_clk";
freq-table-hz = <100000000 200000000>, <0 0>, <0 0>;
phys = <&ufsphy1>;
phy-names = "ufsphy";
};

View File

@ -569,7 +569,9 @@ Debugging code is now compiled in by default but debugging is turned off
with the kernel module parameter debug_flag defaulting to 0. Debugging
can still be switched on and off with an ioctl. To enable debug at
module load time add debug_flag=1 to the module load options, the
debugging output is not voluminous.
debugging output is not voluminous. Debugging can also be enabled
and disabled by writing a '0' (disable) or '1' (enable) to the sysfs
file /sys/bus/scsi/drivers/st/debug_flag.
If the tape seems to hang, I would be very interested to hear where
the driver is waiting. With the command 'ps -l' you can see the state

View File

@ -3696,9 +3696,6 @@ int ata_scsi_add_hosts(struct ata_host *host, struct scsi_host_template *sht)
*/
shost->max_host_blocked = 1;
if (scsi_init_shared_tag_map(shost, host->n_tags))
goto err_add;
rc = scsi_add_host_with_dma(ap->scsi_host,
&ap->tdev, ap->host->dev);
if (rc)

View File

@ -2798,7 +2798,6 @@ static struct scsi_host_template srp_template = {
.cmd_per_lun = SRP_DEFAULT_CMD_SQ_SIZE,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = srp_host_attrs,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -3229,10 +3228,6 @@ static ssize_t srp_create_target(struct device *dev,
if (ret)
goto out;
ret = scsi_init_shared_tag_map(target_host, target_host->can_queue);
if (ret)
goto out;
target->req_ring_size = target->queue_size - SRP_TSK_MGMT_SQ_SIZE;
if (!srp_conn_unique(target->srp_host, target)) {

View File

@ -1038,6 +1038,10 @@ kbuf_alloc_2_sgl(int bytes, u32 sgdir, int sge_offset, int *frags,
int i, buflist_ent;
int sg_spill = MAX_FRAGS_SPILL1;
int dir;
if (bytes < 0)
return NULL;
/* initialization */
*frags = 0;
*blp = NULL;

View File

@ -1994,7 +1994,6 @@ static struct scsi_host_template mptsas_driver_template = {
.cmd_per_lun = 7,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = mptscsih_host_attrs,
.use_blk_tags = 1,
};
static int mptsas_get_linkerrors(struct sas_phy *phy)

View File

@ -325,7 +325,6 @@ NCR_700_detect(struct scsi_host_template *tpnt,
tpnt->slave_destroy = NCR_700_slave_destroy;
tpnt->slave_alloc = NCR_700_slave_alloc;
tpnt->change_queue_depth = NCR_700_change_queue_depth;
tpnt->use_blk_tags = 1;
if(tpnt->name == NULL)
tpnt->name = "53c700";
@ -1107,7 +1106,9 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
BUG();
}
if(hostdata->msgin[1] == A_SIMPLE_TAG_MSG) {
struct scsi_cmnd *SCp = scsi_find_tag(SDp, hostdata->msgin[2]);
struct scsi_cmnd *SCp;
SCp = scsi_host_find_tag(SDp->host, hostdata->msgin[2]);
if(unlikely(SCp == NULL)) {
printk(KERN_ERR "scsi%d: (%d:%d) no saved request for tag %d\n",
host->host_no, reselection_id, lun, hostdata->msgin[2]);
@ -1119,7 +1120,9 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct scsi_cmnd *SCp,
"reselection is tag %d, slot %p(%d)\n",
hostdata->msgin[2], slot, slot->tag);
} else {
struct scsi_cmnd *SCp = scsi_find_tag(SDp, SCSI_NO_TAG);
struct scsi_cmnd *SCp;
SCp = scsi_host_find_tag(SDp->host, SCSI_NO_TAG);
if(unlikely(SCp == NULL)) {
sdev_printk(KERN_ERR, SDp,
"no saved request for untagged cmd\n");
@ -1823,7 +1826,7 @@ NCR_700_queuecommand_lck(struct scsi_cmnd *SCp, void (*done)(struct scsi_cmnd *)
slot->tag, slot);
} else {
slot->tag = SCSI_NO_TAG;
/* must populate current_cmnd for scsi_find_tag to work */
/* must populate current_cmnd for scsi_host_find_tag to work */
SCp->device->current_cmnd = SCp;
}
/* sanity check: some of the commands generated by the mid-layer

View File

@ -2136,7 +2136,7 @@ static unsigned char FPT_SccbMgr_bad_isr(u32 p_port, unsigned char p_card,
*
*---------------------------------------------------------------------*/
static void FPT_SccbMgrTableInitAll()
static void FPT_SccbMgrTableInitAll(void)
{
unsigned char thisCard;

View File

@ -534,7 +534,6 @@ config SCSI_ARCMSR
source "drivers/scsi/esas2r/Kconfig"
source "drivers/scsi/megaraid/Kconfig.megaraid"
source "drivers/scsi/mpt2sas/Kconfig"
source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/ufs/Kconfig"

View File

@ -106,7 +106,6 @@ obj-$(CONFIG_CXLFLASH) += cxlflash/
obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/
obj-$(CONFIG_MEGARAID_SAS) += megaraid/
obj-$(CONFIG_SCSI_MPT2SAS) += mpt2sas/
obj-$(CONFIG_SCSI_MPT3SAS) += mpt3sas/
obj-$(CONFIG_SCSI_UFSHCD) += ufs/
obj-$(CONFIG_SCSI_ACARD) += atp870u.o

View File

@ -259,7 +259,7 @@ MODULE_PARM_DESC(commit, "Control whether a COMMIT_CONFIG is issued to the"
" 0=off, 1=on");
module_param_named(msi, aac_msi, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(msi, "IRQ handling."
" 0=PIC(default), 1=MSI, 2=MSI-X(unsupported, uses MSI)");
" 0=PIC(default), 1=MSI, 2=MSI-X)");
module_param(startup_timeout, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(startup_timeout, "The duration of time in seconds to wait for"
" adapter to have it's kernel up and\n"
@ -570,7 +570,7 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd)
status = aac_fib_send(ContainerCommand,
cmd_fibcontext,
sizeof (struct aac_get_name),
sizeof(struct aac_get_name_resp),
FsaNormal,
0, 1,
(fib_callback)get_container_name_callback,
@ -1052,7 +1052,7 @@ static int aac_get_container_serial(struct scsi_cmnd * scsicmd)
status = aac_fib_send(ContainerCommand,
cmd_fibcontext,
sizeof (struct aac_get_serial),
sizeof(struct aac_get_serial_resp),
FsaNormal,
0, 1,
(fib_callback) get_container_serial_callback,
@ -2977,11 +2977,16 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
return;
BUG_ON(fibptr == NULL);
dev = fibptr->dev;
srbreply = (struct aac_srb_reply *) fib_data(fibptr);
scsi_dma_unmap(scsicmd);
/* expose physical device if expose_physicald flag is on */
if (scsicmd->cmnd[0] == INQUIRY && !(scsicmd->cmnd[1] & 0x01)
&& expose_physicals > 0)
aac_expose_phy_device(scsicmd);
srbreply = (struct aac_srb_reply *) fib_data(fibptr);
scsicmd->sense_buffer[0] = '\0'; /* Initialize sense valid flag to false */
if (fibptr->flags & FIB_CONTEXT_FLAG_FASTRESP) {
@ -2994,147 +2999,157 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
*/
scsi_set_resid(scsicmd, scsi_bufflen(scsicmd)
- le32_to_cpu(srbreply->data_xfer_length));
}
/*
* First check the fib status
*/
scsi_dma_unmap(scsicmd);
if (le32_to_cpu(srbreply->status) != ST_OK) {
int len;
/* expose physical device if expose_physicald flag is on */
if (scsicmd->cmnd[0] == INQUIRY && !(scsicmd->cmnd[1] & 0x01)
&& expose_physicals > 0)
aac_expose_phy_device(scsicmd);
printk(KERN_WARNING "aac_srb_callback: srb failed, status = %d\n", le32_to_cpu(srbreply->status));
len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
SCSI_SENSE_BUFFERSIZE);
scsicmd->result = DID_ERROR << 16
| COMMAND_COMPLETE << 8
| SAM_STAT_CHECK_CONDITION;
memcpy(scsicmd->sense_buffer,
srbreply->sense_data, len);
}
/*
* First check the fib status
*/
if (le32_to_cpu(srbreply->status) != ST_OK){
int len;
printk(KERN_WARNING "aac_srb_callback: srb failed, status = %d\n", le32_to_cpu(srbreply->status));
len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
SCSI_SENSE_BUFFERSIZE);
scsicmd->result = DID_ERROR << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_CHECK_CONDITION;
memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);
}
/*
* Next check the srb status
*/
switch( (le32_to_cpu(srbreply->srb_status))&0x3f){
case SRB_STATUS_ERROR_RECOVERY:
case SRB_STATUS_PENDING:
case SRB_STATUS_SUCCESS:
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_DATA_OVERRUN:
switch(scsicmd->cmnd[0]){
case READ_6:
case WRITE_6:
case READ_10:
case WRITE_10:
case READ_12:
case WRITE_12:
case READ_16:
case WRITE_16:
if (le32_to_cpu(srbreply->data_xfer_length) < scsicmd->underflow) {
printk(KERN_WARNING"aacraid: SCSI CMD underflow\n");
} else {
printk(KERN_WARNING"aacraid: SCSI CMD Data Overrun\n");
}
scsicmd->result = DID_ERROR << 16 | COMMAND_COMPLETE << 8;
break;
case INQUIRY: {
/*
* Next check the srb status
*/
switch ((le32_to_cpu(srbreply->srb_status))&0x3f) {
case SRB_STATUS_ERROR_RECOVERY:
case SRB_STATUS_PENDING:
case SRB_STATUS_SUCCESS:
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
break;
}
default:
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
break;
}
break;
case SRB_STATUS_ABORTED:
scsicmd->result = DID_ABORT << 16 | ABORT << 8;
break;
case SRB_STATUS_ABORT_FAILED:
// Not sure about this one - but assuming the hba was trying to abort for some reason
scsicmd->result = DID_ERROR << 16 | ABORT << 8;
break;
case SRB_STATUS_PARITY_ERROR:
scsicmd->result = DID_PARITY << 16 | MSG_PARITY_ERROR << 8;
break;
case SRB_STATUS_NO_DEVICE:
case SRB_STATUS_INVALID_PATH_ID:
case SRB_STATUS_INVALID_TARGET_ID:
case SRB_STATUS_INVALID_LUN:
case SRB_STATUS_SELECTION_TIMEOUT:
scsicmd->result = DID_NO_CONNECT << 16 | COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_COMMAND_TIMEOUT:
case SRB_STATUS_TIMEOUT:
scsicmd->result = DID_TIME_OUT << 16 | COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_BUSY:
scsicmd->result = DID_BUS_BUSY << 16 | COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_BUS_RESET:
scsicmd->result = DID_RESET << 16 | COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_MESSAGE_REJECTED:
scsicmd->result = DID_ERROR << 16 | MESSAGE_REJECT << 8;
break;
case SRB_STATUS_REQUEST_FLUSHED:
case SRB_STATUS_ERROR:
case SRB_STATUS_INVALID_REQUEST:
case SRB_STATUS_REQUEST_SENSE_FAILED:
case SRB_STATUS_NO_HBA:
case SRB_STATUS_UNEXPECTED_BUS_FREE:
case SRB_STATUS_PHASE_SEQUENCE_FAILURE:
case SRB_STATUS_BAD_SRB_BLOCK_LENGTH:
case SRB_STATUS_DELAYED_RETRY:
case SRB_STATUS_BAD_FUNCTION:
case SRB_STATUS_NOT_STARTED:
case SRB_STATUS_NOT_IN_USE:
case SRB_STATUS_FORCE_ABORT:
case SRB_STATUS_DOMAIN_VALIDATION_FAIL:
default:
#ifdef AAC_DETAILED_STATUS_INFO
printk("aacraid: SRB ERROR(%u) %s scsi cmd 0x%x - scsi status 0x%x\n",
le32_to_cpu(srbreply->srb_status) & 0x3F,
aac_get_status_string(
le32_to_cpu(srbreply->srb_status) & 0x3F),
scsicmd->cmnd[0],
le32_to_cpu(srbreply->scsi_status));
#endif
if ((scsicmd->cmnd[0] == ATA_12)
|| (scsicmd->cmnd[0] == ATA_16)) {
if (scsicmd->cmnd[2] & (0x01 << 5)) {
scsicmd->result = DID_OK << 16
| COMMAND_COMPLETE << 8;
case SRB_STATUS_DATA_OVERRUN:
switch (scsicmd->cmnd[0]) {
case READ_6:
case WRITE_6:
case READ_10:
case WRITE_10:
case READ_12:
case WRITE_12:
case READ_16:
case WRITE_16:
if (le32_to_cpu(srbreply->data_xfer_length)
< scsicmd->underflow)
printk(KERN_WARNING"aacraid: SCSI CMD underflow\n");
else
printk(KERN_WARNING"aacraid: SCSI CMD Data Overrun\n");
scsicmd->result = DID_ERROR << 16
| COMMAND_COMPLETE << 8;
break;
case INQUIRY: {
scsicmd->result = DID_OK << 16
| COMMAND_COMPLETE << 8;
break;
}
default:
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8;
break;
}
break;
case SRB_STATUS_ABORTED:
scsicmd->result = DID_ABORT << 16 | ABORT << 8;
break;
case SRB_STATUS_ABORT_FAILED:
/*
* Not sure about this one - but assuming the
* hba was trying to abort for some reason
*/
scsicmd->result = DID_ERROR << 16 | ABORT << 8;
break;
case SRB_STATUS_PARITY_ERROR:
scsicmd->result = DID_PARITY << 16
| MSG_PARITY_ERROR << 8;
break;
case SRB_STATUS_NO_DEVICE:
case SRB_STATUS_INVALID_PATH_ID:
case SRB_STATUS_INVALID_TARGET_ID:
case SRB_STATUS_INVALID_LUN:
case SRB_STATUS_SELECTION_TIMEOUT:
scsicmd->result = DID_NO_CONNECT << 16
| COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_COMMAND_TIMEOUT:
case SRB_STATUS_TIMEOUT:
scsicmd->result = DID_TIME_OUT << 16
| COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_BUSY:
scsicmd->result = DID_BUS_BUSY << 16
| COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_BUS_RESET:
scsicmd->result = DID_RESET << 16
| COMMAND_COMPLETE << 8;
break;
case SRB_STATUS_MESSAGE_REJECTED:
scsicmd->result = DID_ERROR << 16
| MESSAGE_REJECT << 8;
break;
case SRB_STATUS_REQUEST_FLUSHED:
case SRB_STATUS_ERROR:
case SRB_STATUS_INVALID_REQUEST:
case SRB_STATUS_REQUEST_SENSE_FAILED:
case SRB_STATUS_NO_HBA:
case SRB_STATUS_UNEXPECTED_BUS_FREE:
case SRB_STATUS_PHASE_SEQUENCE_FAILURE:
case SRB_STATUS_BAD_SRB_BLOCK_LENGTH:
case SRB_STATUS_DELAYED_RETRY:
case SRB_STATUS_BAD_FUNCTION:
case SRB_STATUS_NOT_STARTED:
case SRB_STATUS_NOT_IN_USE:
case SRB_STATUS_FORCE_ABORT:
case SRB_STATUS_DOMAIN_VALIDATION_FAIL:
default:
#ifdef AAC_DETAILED_STATUS_INFO
printk(KERN_INFO "aacraid: SRB ERROR(%u) %s scsi cmd 0x%x - scsi status 0x%x\n",
le32_to_cpu(srbreply->srb_status) & 0x3F,
aac_get_status_string(
le32_to_cpu(srbreply->srb_status) & 0x3F),
scsicmd->cmnd[0],
le32_to_cpu(srbreply->scsi_status));
#endif
if ((scsicmd->cmnd[0] == ATA_12)
|| (scsicmd->cmnd[0] == ATA_16)) {
if (scsicmd->cmnd[2] & (0x01 << 5)) {
scsicmd->result = DID_OK << 16
| COMMAND_COMPLETE << 8;
break;
} else {
scsicmd->result = DID_ERROR << 16
| COMMAND_COMPLETE << 8;
break;
}
} else {
scsicmd->result = DID_ERROR << 16
| COMMAND_COMPLETE << 8;
| COMMAND_COMPLETE << 8;
break;
}
} else {
scsicmd->result = DID_ERROR << 16
| COMMAND_COMPLETE << 8;
break;
}
}
if (le32_to_cpu(srbreply->scsi_status) == SAM_STAT_CHECK_CONDITION) {
int len;
scsicmd->result |= SAM_STAT_CHECK_CONDITION;
len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
SCSI_SENSE_BUFFERSIZE);
if (le32_to_cpu(srbreply->scsi_status)
== SAM_STAT_CHECK_CONDITION) {
int len;
scsicmd->result |= SAM_STAT_CHECK_CONDITION;
len = min_t(u32, le32_to_cpu(srbreply->sense_data_size),
SCSI_SENSE_BUFFERSIZE);
#ifdef AAC_DETAILED_STATUS_INFO
printk(KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
le32_to_cpu(srbreply->status), len);
printk(KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
le32_to_cpu(srbreply->status), len);
#endif
memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);
memcpy(scsicmd->sense_buffer,
srbreply->sense_data, len);
}
}
/*
* OR in the scsi status (already shifted up a bit)

View File

@ -12,7 +12,7 @@
* D E F I N E S
*----------------------------------------------------------------------------*/
#define AAC_MAX_MSIX 8 /* vectors */
#define AAC_MAX_MSIX 32 /* vectors */
#define AAC_PCI_MSI_ENABLE 0x8000
enum {
@ -62,7 +62,7 @@ enum {
#define PMC_GLOBAL_INT_BIT0 0x00000001
#ifndef AAC_DRIVER_BUILD
# define AAC_DRIVER_BUILD 40709
# define AAC_DRIVER_BUILD 41010
# define AAC_DRIVER_BRANCH "-ms"
#endif
#define MAXIMUM_NUM_CONTAINERS 32
@ -547,6 +547,7 @@ struct adapter_ops
int (*adapter_sync_cmd)(struct aac_dev *dev, u32 command, u32 p1, u32 p2, u32 p3, u32 p4, u32 p5, u32 p6, u32 *status, u32 *r1, u32 *r2, u32 *r3, u32 *r4);
int (*adapter_check_health)(struct aac_dev *dev);
int (*adapter_restart)(struct aac_dev *dev, int bled);
void (*adapter_start)(struct aac_dev *dev);
/* Transport operations */
int (*adapter_ioremap)(struct aac_dev * dev, u32 size);
irq_handler_t adapter_intr;
@ -843,6 +844,10 @@ struct src_registers {
&((AEP)->regs.src.bar0->CSR))
#define src_writel(AEP, CSR, value) writel(value, \
&((AEP)->regs.src.bar0->CSR))
#if defined(writeq)
#define src_writeq(AEP, CSR, value) writeq(value, \
&((AEP)->regs.src.bar0->CSR))
#endif
#define SRC_ODR_SHIFT 12
#define SRC_IDR_SHIFT 9
@ -1162,6 +1167,11 @@ struct aac_dev
struct fsa_dev_info *fsa_dev;
struct task_struct *thread;
int cardtype;
/*
*This lock will protect the two 32-bit
*writes to the Inbound Queue
*/
spinlock_t iq_lock;
/*
* The following is the device specific extension.
@ -1247,6 +1257,9 @@ struct aac_dev
#define aac_adapter_restart(dev,bled) \
(dev)->a_ops.adapter_restart(dev,bled)
#define aac_adapter_start(dev) \
((dev)->a_ops.adapter_start(dev))
#define aac_adapter_ioremap(dev, size) \
(dev)->a_ops.adapter_ioremap(dev, size)
@ -2097,6 +2110,8 @@ static inline unsigned int cap_to_cyls(sector_t capacity, unsigned divisor)
#define AAC_OWNER_ERROR_HANDLER 0x103
#define AAC_OWNER_FIRMWARE 0x106
int aac_acquire_irq(struct aac_dev *dev);
void aac_free_irq(struct aac_dev *dev);
const char *aac_driverinfo(struct Scsi_Host *);
struct fib *aac_fib_alloc(struct aac_dev *dev);
int aac_fib_setup(struct aac_dev *dev);
@ -2127,6 +2142,7 @@ int aac_sa_init(struct aac_dev *dev);
int aac_src_init(struct aac_dev *dev);
int aac_srcv_init(struct aac_dev *dev);
int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw_fib, int wait, struct fib * fibptr, unsigned long *nonotify);
void aac_define_int_mode(struct aac_dev *dev);
unsigned int aac_response_normal(struct aac_queue * q);
unsigned int aac_command_normal(struct aac_queue * q);
unsigned int aac_intr_normal(struct aac_dev *dev, u32 Index,

View File

@ -43,8 +43,6 @@
#include "aacraid.h"
static void aac_define_int_mode(struct aac_dev *dev);
struct aac_common aac_config = {
.irq_mod = 1
};
@ -338,6 +336,74 @@ static int aac_comm_init(struct aac_dev * dev)
return 0;
}
void aac_define_int_mode(struct aac_dev *dev)
{
int i, msi_count, min_msix;
msi_count = i = 0;
/* max. vectors from GET_COMM_PREFERRED_SETTINGS */
if (dev->max_msix == 0 ||
dev->pdev->device == PMC_DEVICE_S6 ||
dev->sync_mode) {
dev->max_msix = 1;
dev->vector_cap =
dev->scsi_host_ptr->can_queue +
AAC_NUM_MGT_FIB;
return;
}
/* Don't bother allocating more MSI-X vectors than cpus */
msi_count = min(dev->max_msix,
(unsigned int)num_online_cpus());
dev->max_msix = msi_count;
if (msi_count > AAC_MAX_MSIX)
msi_count = AAC_MAX_MSIX;
for (i = 0; i < msi_count; i++)
dev->msixentry[i].entry = i;
if (msi_count > 1 &&
pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
min_msix = 2;
i = pci_enable_msix_range(dev->pdev,
dev->msixentry,
min_msix,
msi_count);
if (i > 0) {
dev->msi_enabled = 1;
msi_count = i;
} else {
dev->msi_enabled = 0;
printk(KERN_ERR "%s%d: MSIX not supported!! Will try MSI 0x%x.\n",
dev->name, dev->id, i);
}
}
if (!dev->msi_enabled) {
msi_count = 1;
i = pci_enable_msi(dev->pdev);
if (!i) {
dev->msi_enabled = 1;
dev->msi = 1;
} else {
printk(KERN_ERR "%s%d: MSI not supported!! Will try INTx 0x%x.\n",
dev->name, dev->id, i);
}
}
if (!dev->msi_enabled)
dev->max_msix = msi_count = 1;
else {
if (dev->max_msix > msi_count)
dev->max_msix = msi_count;
}
dev->vector_cap =
(dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB) /
msi_count;
}
struct aac_dev *aac_init_adapter(struct aac_dev *dev)
{
u32 status[5];
@ -350,6 +416,7 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
dev->management_fib_count = 0;
spin_lock_init(&dev->manage_lock);
spin_lock_init(&dev->sync_lock);
spin_lock_init(&dev->iq_lock);
dev->max_fib_size = sizeof(struct hw_fib);
dev->sg_tablesize = host->sg_tablesize = (dev->max_fib_size
- sizeof(struct aac_fibhdr)
@ -508,79 +575,3 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
return dev;
}
static void aac_define_int_mode(struct aac_dev *dev)
{
int i, msi_count;
msi_count = i = 0;
/* max. vectors from GET_COMM_PREFERRED_SETTINGS */
if (dev->max_msix == 0 ||
dev->pdev->device == PMC_DEVICE_S6 ||
dev->sync_mode) {
dev->max_msix = 1;
dev->vector_cap =
dev->scsi_host_ptr->can_queue +
AAC_NUM_MGT_FIB;
return;
}
msi_count = min(dev->max_msix,
(unsigned int)num_online_cpus());
dev->max_msix = msi_count;
if (msi_count > AAC_MAX_MSIX)
msi_count = AAC_MAX_MSIX;
for (i = 0; i < msi_count; i++)
dev->msixentry[i].entry = i;
if (msi_count > 1 &&
pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
i = pci_enable_msix(dev->pdev,
dev->msixentry,
msi_count);
/* Check how many MSIX vectors are allocated */
if (i >= 0) {
dev->msi_enabled = 1;
if (i) {
msi_count = i;
if (pci_enable_msix(dev->pdev,
dev->msixentry,
msi_count)) {
dev->msi_enabled = 0;
printk(KERN_ERR "%s%d: MSIX not supported!! Will try MSI 0x%x.\n",
dev->name, dev->id, i);
}
}
} else {
dev->msi_enabled = 0;
printk(KERN_ERR "%s%d: MSIX not supported!! Will try MSI 0x%x.\n",
dev->name, dev->id, i);
}
}
if (!dev->msi_enabled) {
msi_count = 1;
i = pci_enable_msi(dev->pdev);
if (!i) {
dev->msi_enabled = 1;
dev->msi = 1;
} else {
printk(KERN_ERR "%s%d: MSI not supported!! Will try INTx 0x%x.\n",
dev->name, dev->id, i);
}
}
if (!dev->msi_enabled)
dev->max_msix = msi_count = 1;
else {
if (dev->max_msix > msi_count)
dev->max_msix = msi_count;
}
dev->vector_cap =
(dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB) /
msi_count;
}

View File

@ -1270,13 +1270,12 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
static int _aac_reset_adapter(struct aac_dev *aac, int forced)
{
int index, quirks;
int retval, i;
int retval;
struct Scsi_Host *host;
struct scsi_device *dev;
struct scsi_cmnd *command;
struct scsi_cmnd *command_list;
int jafo = 0;
int cpu;
/*
* Assumptions:
@ -1339,35 +1338,7 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
aac->comm_phys = 0;
kfree(aac->queues);
aac->queues = NULL;
cpu = cpumask_first(cpu_online_mask);
if (aac->pdev->device == PMC_DEVICE_S6 ||
aac->pdev->device == PMC_DEVICE_S7 ||
aac->pdev->device == PMC_DEVICE_S8 ||
aac->pdev->device == PMC_DEVICE_S9) {
if (aac->max_msix > 1) {
for (i = 0; i < aac->max_msix; i++) {
if (irq_set_affinity_hint(
aac->msixentry[i].vector,
NULL)) {
printk(KERN_ERR "%s%d: Failed to reset IRQ affinity for cpu %d\n",
aac->name,
aac->id,
cpu);
}
cpu = cpumask_next(cpu,
cpu_online_mask);
free_irq(aac->msixentry[i].vector,
&(aac->aac_msix[i]));
}
pci_disable_msix(aac->pdev);
} else {
free_irq(aac->pdev->irq, &(aac->aac_msix[0]));
}
} else {
free_irq(aac->pdev->irq, aac);
}
if (aac->msi)
pci_disable_msi(aac->pdev);
aac_free_irq(aac);
kfree(aac->fsa_dev);
aac->fsa_dev = NULL;
quirks = aac_get_driver_ident(index)->quirks;
@ -1978,3 +1949,83 @@ int aac_command_thread(void *data)
dev->aif_thread = 0;
return 0;
}
int aac_acquire_irq(struct aac_dev *dev)
{
int i;
int j;
int ret = 0;
int cpu;
cpu = cpumask_first(cpu_online_mask);
if (!dev->sync_mode && dev->msi_enabled && dev->max_msix > 1) {
for (i = 0; i < dev->max_msix; i++) {
dev->aac_msix[i].vector_no = i;
dev->aac_msix[i].dev = dev;
if (request_irq(dev->msixentry[i].vector,
dev->a_ops.adapter_intr,
0, "aacraid", &(dev->aac_msix[i]))) {
printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
dev->name, dev->id, i);
for (j = 0 ; j < i ; j++)
free_irq(dev->msixentry[j].vector,
&(dev->aac_msix[j]));
pci_disable_msix(dev->pdev);
ret = -1;
}
if (irq_set_affinity_hint(dev->msixentry[i].vector,
get_cpu_mask(cpu))) {
printk(KERN_ERR "%s%d: Failed to set IRQ affinity for cpu %d\n",
dev->name, dev->id, cpu);
}
cpu = cpumask_next(cpu, cpu_online_mask);
}
} else {
dev->aac_msix[0].vector_no = 0;
dev->aac_msix[0].dev = dev;
if (request_irq(dev->pdev->irq, dev->a_ops.adapter_intr,
IRQF_SHARED, "aacraid",
&(dev->aac_msix[0])) < 0) {
if (dev->msi)
pci_disable_msi(dev->pdev);
printk(KERN_ERR "%s%d: Interrupt unavailable.\n",
dev->name, dev->id);
ret = -1;
}
}
return ret;
}
void aac_free_irq(struct aac_dev *dev)
{
int i;
int cpu;
cpu = cpumask_first(cpu_online_mask);
if (dev->pdev->device == PMC_DEVICE_S6 ||
dev->pdev->device == PMC_DEVICE_S7 ||
dev->pdev->device == PMC_DEVICE_S8 ||
dev->pdev->device == PMC_DEVICE_S9) {
if (dev->max_msix > 1) {
for (i = 0; i < dev->max_msix; i++) {
if (irq_set_affinity_hint(
dev->msixentry[i].vector, NULL)) {
printk(KERN_ERR "%s%d: Failed to reset IRQ affinity for cpu %d\n",
dev->name, dev->id, cpu);
}
cpu = cpumask_next(cpu, cpu_online_mask);
free_irq(dev->msixentry[i].vector,
&(dev->aac_msix[i]));
}
} else {
free_irq(dev->pdev->irq, &(dev->aac_msix[0]));
}
} else {
free_irq(dev->pdev->irq, dev);
}
if (dev->msi)
pci_disable_msi(dev->pdev);
else if (dev->max_msix > 1)
pci_disable_msix(dev->pdev);
}

View File

@ -1317,6 +1317,154 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
return error;
}
#if (defined(CONFIG_PM))
void aac_release_resources(struct aac_dev *aac)
{
int i;
aac_adapter_disable_int(aac);
if (aac->pdev->device == PMC_DEVICE_S6 ||
aac->pdev->device == PMC_DEVICE_S7 ||
aac->pdev->device == PMC_DEVICE_S8 ||
aac->pdev->device == PMC_DEVICE_S9) {
if (aac->max_msix > 1) {
for (i = 0; i < aac->max_msix; i++)
free_irq(aac->msixentry[i].vector,
&(aac->aac_msix[i]));
} else {
free_irq(aac->pdev->irq, &(aac->aac_msix[0]));
}
} else {
free_irq(aac->pdev->irq, aac);
}
if (aac->msi)
pci_disable_msi(aac->pdev);
else if (aac->max_msix > 1)
pci_disable_msix(aac->pdev);
}
static int aac_acquire_resources(struct aac_dev *dev)
{
int i, j;
int instance = dev->id;
const char *name = dev->name;
unsigned long status;
/*
* First clear out all interrupts. Then enable the one's that we
* can handle.
*/
while (!((status = src_readl(dev, MUnit.OMR)) & KERNEL_UP_AND_RUNNING)
|| status == 0xffffffff)
msleep(20);
aac_adapter_disable_int(dev);
aac_adapter_enable_int(dev);
if ((dev->pdev->device == PMC_DEVICE_S7 ||
dev->pdev->device == PMC_DEVICE_S8 ||
dev->pdev->device == PMC_DEVICE_S9))
aac_define_int_mode(dev);
if (dev->msi_enabled)
aac_src_access_devreg(dev, AAC_ENABLE_MSIX);
if (!dev->sync_mode && dev->msi_enabled && dev->max_msix > 1) {
for (i = 0; i < dev->max_msix; i++) {
dev->aac_msix[i].vector_no = i;
dev->aac_msix[i].dev = dev;
if (request_irq(dev->msixentry[i].vector,
dev->a_ops.adapter_intr,
0, "aacraid", &(dev->aac_msix[i]))) {
printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
name, instance, i);
for (j = 0 ; j < i ; j++)
free_irq(dev->msixentry[j].vector,
&(dev->aac_msix[j]));
pci_disable_msix(dev->pdev);
goto error_iounmap;
}
}
} else {
dev->aac_msix[0].vector_no = 0;
dev->aac_msix[0].dev = dev;
if (request_irq(dev->pdev->irq, dev->a_ops.adapter_intr,
IRQF_SHARED, "aacraid",
&(dev->aac_msix[0])) < 0) {
if (dev->msi)
pci_disable_msi(dev->pdev);
printk(KERN_ERR "%s%d: Interrupt unavailable.\n",
name, instance);
goto error_iounmap;
}
}
aac_adapter_enable_int(dev);
if (!dev->sync_mode)
aac_adapter_start(dev);
return 0;
error_iounmap:
return -1;
}
static int aac_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct Scsi_Host *shost = pci_get_drvdata(pdev);
struct aac_dev *aac = (struct aac_dev *)shost->hostdata;
scsi_block_requests(shost);
aac_send_shutdown(aac);
aac_release_resources(aac);
pci_set_drvdata(pdev, shost);
pci_save_state(pdev);
pci_disable_device(pdev);
pci_set_power_state(pdev, pci_choose_state(pdev, state));
return 0;
}
static int aac_resume(struct pci_dev *pdev)
{
struct Scsi_Host *shost = pci_get_drvdata(pdev);
struct aac_dev *aac = (struct aac_dev *)shost->hostdata;
int r;
pci_set_power_state(pdev, PCI_D0);
pci_enable_wake(pdev, PCI_D0, 0);
pci_restore_state(pdev);
r = pci_enable_device(pdev);
if (r)
goto fail_device;
pci_set_master(pdev);
if (aac_acquire_resources(aac))
goto fail_device;
/*
* reset this flag to unblock ioctl() as it was set at
* aac_send_shutdown() to block ioctls from upperlayer
*/
aac->adapter_shutdown = 0;
scsi_unblock_requests(shost);
return 0;
fail_device:
printk(KERN_INFO "%s%d: resume failed.\n", aac->name, aac->id);
scsi_host_put(shost);
pci_disable_device(pdev);
return -ENODEV;
}
#endif
static void aac_shutdown(struct pci_dev *dev)
{
struct Scsi_Host *shost = pci_get_drvdata(dev);
@ -1356,6 +1504,10 @@ static struct pci_driver aac_pci_driver = {
.id_table = aac_pci_tbl,
.probe = aac_probe_one,
.remove = aac_remove_one,
#if (defined(CONFIG_PM))
.suspend = aac_suspend,
.resume = aac_resume,
#endif
.shutdown = aac_shutdown,
};

View File

@ -623,6 +623,7 @@ int _aac_rx_init(struct aac_dev *dev)
dev->a_ops.adapter_sync_cmd = rx_sync_cmd;
dev->a_ops.adapter_check_health = aac_rx_check_health;
dev->a_ops.adapter_restart = aac_rx_restart_adapter;
dev->a_ops.adapter_start = aac_rx_start_adapter;
/*
* First clear out all interrupts. Then enable the one's that we

View File

@ -372,6 +372,7 @@ int aac_sa_init(struct aac_dev *dev)
dev->a_ops.adapter_sync_cmd = sa_sync_cmd;
dev->a_ops.adapter_check_health = aac_sa_check_health;
dev->a_ops.adapter_restart = aac_sa_restart_adapter;
dev->a_ops.adapter_start = aac_sa_start_adapter;
dev->a_ops.adapter_intr = aac_sa_intr;
dev->a_ops.adapter_deliver = aac_rx_deliver_producer;
dev->a_ops.adapter_ioremap = aac_sa_ioremap;

View File

@ -447,6 +447,10 @@ static int aac_src_deliver_message(struct fib *fib)
u32 fibsize;
dma_addr_t address;
struct aac_fib_xporthdr *pFibX;
#if !defined(writeq)
unsigned long flags;
#endif
u16 hdr_size = le16_to_cpu(fib->hw_fib_va->header.Size);
atomic_inc(&q->numpending);
@ -511,10 +515,14 @@ static int aac_src_deliver_message(struct fib *fib)
return -EINVAL;
address |= fibsize;
}
#if defined(writeq)
src_writeq(dev, MUnit.IQ_L, (u64)address);
#else
spin_lock_irqsave(&fib->dev->iq_lock, flags);
src_writel(dev, MUnit.IQ_H, upper_32_bits(address) & 0xffffffff);
src_writel(dev, MUnit.IQ_L, address & 0xffffffff);
spin_unlock_irqrestore(&fib->dev->iq_lock, flags);
#endif
return 0;
}
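For context (this note is not part of the diff itself): the hunk above posts the 64-bit inbound-queue address with a single writeq() where the architecture provides one, and otherwise splits it into two 32-bit writes that the new iq_lock keeps from interleaving with another submitter, matching the comment added alongside iq_lock in aacraid.h. A condensed sketch of that pattern, with the surrounding function boiled down and a stand-in function name:

	/* Sketch only: post a 64-bit address to the IQ_H/IQ_L register pair
	 * (the diff writes the high word first, then the low word).  On
	 * builds without writeq() the two halves are serialized by iq_lock
	 * so another CPU cannot interleave its own high/low pair. */
	static void aac_post_iq_address(struct aac_dev *dev, u64 address)
	{
	#if defined(writeq)
		src_writeq(dev, MUnit.IQ_L, address);
	#else
		unsigned long flags;

		spin_lock_irqsave(&dev->iq_lock, flags);
		src_writel(dev, MUnit.IQ_H, upper_32_bits(address) & 0xffffffff);
		src_writel(dev, MUnit.IQ_L, address & 0xffffffff);
		spin_unlock_irqrestore(&dev->iq_lock, flags);
	#endif
	}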
@ -726,6 +734,7 @@ int aac_src_init(struct aac_dev *dev)
dev->a_ops.adapter_sync_cmd = src_sync_cmd;
dev->a_ops.adapter_check_health = aac_src_check_health;
dev->a_ops.adapter_restart = aac_src_restart_adapter;
dev->a_ops.adapter_start = aac_src_start_adapter;
/*
* First clear out all interrupts. Then enable the one's that we
@ -741,7 +750,7 @@ int aac_src_init(struct aac_dev *dev)
if (dev->comm_interface != AAC_COMM_MESSAGE_TYPE1)
goto error_iounmap;
dev->msi = aac_msi && !pci_enable_msi(dev->pdev);
dev->msi = !pci_enable_msi(dev->pdev);
dev->aac_msix[0].vector_no = 0;
dev->aac_msix[0].dev = dev;
@ -789,9 +798,7 @@ int aac_srcv_init(struct aac_dev *dev)
unsigned long status;
int restart = 0;
int instance = dev->id;
int i, j;
const char *name = dev->name;
int cpu;
dev->a_ops.adapter_ioremap = aac_srcv_ioremap;
dev->a_ops.adapter_comm = aac_src_select_comm;
@ -892,6 +899,7 @@ int aac_srcv_init(struct aac_dev *dev)
dev->a_ops.adapter_sync_cmd = src_sync_cmd;
dev->a_ops.adapter_check_health = aac_src_check_health;
dev->a_ops.adapter_restart = aac_src_restart_adapter;
dev->a_ops.adapter_start = aac_src_start_adapter;
/*
* First clear out all interrupts. Then enable the one's that we
@ -908,48 +916,10 @@ int aac_srcv_init(struct aac_dev *dev)
goto error_iounmap;
if (dev->msi_enabled)
aac_src_access_devreg(dev, AAC_ENABLE_MSIX);
if (!dev->sync_mode && dev->msi_enabled && dev->max_msix > 1) {
cpu = cpumask_first(cpu_online_mask);
for (i = 0; i < dev->max_msix; i++) {
dev->aac_msix[i].vector_no = i;
dev->aac_msix[i].dev = dev;
if (request_irq(dev->msixentry[i].vector,
dev->a_ops.adapter_intr,
0,
"aacraid",
&(dev->aac_msix[i]))) {
printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
name, instance, i);
for (j = 0 ; j < i ; j++)
free_irq(dev->msixentry[j].vector,
&(dev->aac_msix[j]));
pci_disable_msix(dev->pdev);
goto error_iounmap;
}
if (irq_set_affinity_hint(
dev->msixentry[i].vector,
get_cpu_mask(cpu))) {
printk(KERN_ERR "%s%d: Failed to set IRQ affinity for cpu %d\n",
name, instance, cpu);
}
cpu = cpumask_next(cpu, cpu_online_mask);
}
} else {
dev->aac_msix[0].vector_no = 0;
dev->aac_msix[0].dev = dev;
if (aac_acquire_irq(dev))
goto error_iounmap;
if (request_irq(dev->pdev->irq, dev->a_ops.adapter_intr,
IRQF_SHARED,
"aacraid",
&(dev->aac_msix[0])) < 0) {
if (dev->msi)
pci_disable_msi(dev->pdev);
printk(KERN_ERR "%s%d: Interrupt unavailable.\n",
name, instance);
goto error_iounmap;
}
}
dev->dbg_base = dev->base_start;
dev->dbg_base_mapped = dev->base;
dev->dbg_size = dev->base_size;

View File

@ -10819,7 +10819,6 @@ static struct scsi_host_template advansys_template = {
* by enabling clustering, I/O throughput increases as well.
*/
.use_clustering = ENABLE_CLUSTERING,
.use_blk_tags = 1,
};
static int advansys_wide_init_chip(struct Scsi_Host *shost)
@ -11211,11 +11210,6 @@ static int advansys_board_found(struct Scsi_Host *shost, unsigned int iop,
/* Set maximum number of queues the adapter can handle. */
shost->can_queue = adv_dvc_varp->max_host_qng;
}
ret = scsi_init_shared_tag_map(shost, shost->can_queue);
if (ret) {
shost_printk(KERN_ERR, shost, "init tag map failed\n");
goto err_free_dma;
}
/*
* Set the maximum number of scatter-gather elements the

View File

@ -925,7 +925,6 @@ struct scsi_host_template aic79xx_driver_template = {
.slave_configure = ahd_linux_slave_configure,
.target_alloc = ahd_linux_target_alloc,
.target_destroy = ahd_linux_target_destroy,
.use_blk_tags = 1,
};
/******************************** Bus DMA *************************************/

View File

@ -812,7 +812,6 @@ struct scsi_host_template aic7xxx_driver_template = {
.slave_configure = ahc_linux_slave_configure,
.target_alloc = ahc_linux_target_alloc,
.target_destroy = ahc_linux_target_destroy,
.use_blk_tags = 1,
};
/**************************** Tasklet Handler *********************************/

View File

@ -73,7 +73,6 @@ static struct scsi_host_template aic94xx_sht = {
.eh_bus_reset_handler = sas_eh_bus_reset_handler,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -704,10 +703,10 @@ static int asd_unregister_sas_ha(struct asd_ha_struct *asd_ha)
{
int err;
scsi_remove_host(asd_ha->sas_ha.core.shost);
err = sas_unregister_ha(&asd_ha->sas_ha);
sas_remove_host(asd_ha->sas_ha.core.shost);
scsi_remove_host(asd_ha->sas_ha.core.shost);
scsi_host_put(asd_ha->sas_ha.core.shost);
kfree(asd_ha->sas_ha.sas_phy);

View File

@ -1198,14 +1198,16 @@ free_io_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle)
* alloc_wrb_handle - To allocate a wrb handle
* @phba: The hba pointer
* @cid: The cid to use for allocation
* @pwrb_context: ptr to ptr to wrb context
*
* This happens under session_lock until submission to chip
*/
struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid)
struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid,
struct hwi_wrb_context **pcontext)
{
struct hwi_wrb_context *pwrb_context;
struct hwi_controller *phwi_ctrlr;
struct wrb_handle *pwrb_handle, *pwrb_handle_tmp;
struct wrb_handle *pwrb_handle;
uint16_t cri_index = BE_GET_CRI_FROM_CID(cid);
phwi_ctrlr = phba->phwi_ctrlr;
@ -1219,9 +1221,9 @@ struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid)
pwrb_context->alloc_index = 0;
else
pwrb_context->alloc_index++;
pwrb_handle_tmp = pwrb_context->pwrb_handle_base[
pwrb_context->alloc_index];
pwrb_handle->nxt_wrb_index = pwrb_handle_tmp->wrb_index;
/* Return the context address */
*pcontext = pwrb_context;
} else
pwrb_handle = NULL;
return pwrb_handle;
@ -3184,7 +3186,7 @@ be_sgl_create_contiguous(void *virtual_address,
{
WARN_ON(!virtual_address);
WARN_ON(!physical_address);
WARN_ON(!length > 0);
WARN_ON(!length);
WARN_ON(!sgl);
sgl->va = virtual_address;
@ -4678,6 +4680,7 @@ beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn,
struct beiscsi_offload_params *params)
{
struct wrb_handle *pwrb_handle;
struct hwi_wrb_context *pwrb_context = NULL;
struct beiscsi_hba *phba = beiscsi_conn->phba;
struct iscsi_task *task = beiscsi_conn->task;
struct iscsi_session *session = task->conn->session;
@ -4692,14 +4695,17 @@ beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn,
beiscsi_cleanup_task(task);
spin_unlock_bh(&session->back_lock);
pwrb_handle = alloc_wrb_handle(phba, beiscsi_conn->beiscsi_conn_cid);
pwrb_handle = alloc_wrb_handle(phba, beiscsi_conn->beiscsi_conn_cid,
&pwrb_context);
/* Check for the adapter family */
if (is_chip_be2_be3r(phba))
beiscsi_offload_cxn_v0(params, pwrb_handle,
phba->init_mem);
phba->init_mem,
pwrb_context);
else
beiscsi_offload_cxn_v2(params, pwrb_handle);
beiscsi_offload_cxn_v2(params, pwrb_handle,
pwrb_context);
be_dws_le_to_cpu(pwrb_handle->pwrb,
sizeof(struct iscsi_target_context_update_wrb));
@ -4769,7 +4775,8 @@ static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
goto free_hndls;
}
io_task->pwrb_handle = alloc_wrb_handle(phba,
beiscsi_conn->beiscsi_conn_cid);
beiscsi_conn->beiscsi_conn_cid,
&io_task->pwrb_context);
if (!io_task->pwrb_handle) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_IO | BEISCSI_LOG_CONFIG,
@ -4803,7 +4810,8 @@ static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
io_task->psgl_handle;
io_task->pwrb_handle =
alloc_wrb_handle(phba,
beiscsi_conn->beiscsi_conn_cid);
beiscsi_conn->beiscsi_conn_cid,
&io_task->pwrb_context);
if (!io_task->pwrb_handle) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_IO |
@ -4839,7 +4847,8 @@ static int beiscsi_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
}
io_task->pwrb_handle =
alloc_wrb_handle(phba,
beiscsi_conn->beiscsi_conn_cid);
beiscsi_conn->beiscsi_conn_cid,
&io_task->pwrb_context);
if (!io_task->pwrb_handle) {
beiscsi_log(phba, KERN_ERR,
BEISCSI_LOG_IO | BEISCSI_LOG_CONFIG,
@ -4925,7 +4934,12 @@ int beiscsi_iotask_v2(struct iscsi_task *task, struct scatterlist *sg,
hwi_write_sgl_v2(pwrb, sg, num_sg, io_task);
AMAP_SET_BITS(struct amap_iscsi_wrb_v2, ptr2nextwrb, pwrb,
io_task->pwrb_handle->nxt_wrb_index);
io_task->pwrb_handle->wrb_index);
if (io_task->pwrb_context->plast_wrb)
AMAP_SET_BITS(struct amap_iscsi_wrb_v2, ptr2nextwrb,
io_task->pwrb_context->plast_wrb,
io_task->pwrb_handle->wrb_index);
io_task->pwrb_context->plast_wrb = pwrb;
be_dws_le_to_cpu(pwrb, sizeof(struct iscsi_wrb));
@ -4982,7 +4996,13 @@ static int beiscsi_iotask(struct iscsi_task *task, struct scatterlist *sg,
hwi_write_sgl(pwrb, sg, num_sg, io_task);
AMAP_SET_BITS(struct amap_iscsi_wrb, ptr2nextwrb, pwrb,
io_task->pwrb_handle->nxt_wrb_index);
io_task->pwrb_handle->wrb_index);
if (io_task->pwrb_context->plast_wrb)
AMAP_SET_BITS(struct amap_iscsi_wrb, ptr2nextwrb,
io_task->pwrb_context->plast_wrb,
io_task->pwrb_handle->wrb_index);
io_task->pwrb_context->plast_wrb = pwrb;
be_dws_le_to_cpu(pwrb, sizeof(struct iscsi_wrb));
doorbell |= beiscsi_conn->beiscsi_conn_cid & DB_WRB_POST_CID_MASK;
@ -5020,7 +5040,13 @@ static int beiscsi_mtask(struct iscsi_task *task)
AMAP_SET_BITS(struct amap_iscsi_wrb, r2t_exp_dtl, pwrb,
task->data_count);
AMAP_SET_BITS(struct amap_iscsi_wrb, ptr2nextwrb, pwrb,
io_task->pwrb_handle->nxt_wrb_index);
io_task->pwrb_handle->wrb_index);
if (io_task->pwrb_context->plast_wrb)
AMAP_SET_BITS(struct amap_iscsi_wrb, ptr2nextwrb,
io_task->pwrb_context->plast_wrb,
io_task->pwrb_handle->wrb_index);
io_task->pwrb_context->plast_wrb = pwrb;
pwrb_typeoffset = BE_WRB_TYPE_OFFSET;
} else {
AMAP_SET_BITS(struct amap_iscsi_wrb_v2, cmdsn_itt, pwrb,
@ -5032,7 +5058,13 @@ static int beiscsi_mtask(struct iscsi_task *task)
AMAP_SET_BITS(struct amap_iscsi_wrb_v2, r2t_exp_dtl, pwrb,
task->data_count);
AMAP_SET_BITS(struct amap_iscsi_wrb_v2, ptr2nextwrb, pwrb,
io_task->pwrb_handle->nxt_wrb_index);
io_task->pwrb_handle->wrb_index);
if (io_task->pwrb_context->plast_wrb)
AMAP_SET_BITS(struct amap_iscsi_wrb_v2, ptr2nextwrb,
io_task->pwrb_context->plast_wrb,
io_task->pwrb_handle->wrb_index);
io_task->pwrb_context->plast_wrb = pwrb;
pwrb_typeoffset = SKH_WRB_TYPE_OFFSET;
}

View File

@ -36,7 +36,7 @@
#include <scsi/scsi_transport_iscsi.h>
#define DRV_NAME "be2iscsi"
#define BUILD_STR "10.6.0.0"
#define BUILD_STR "10.6.0.1"
#define BE_NAME "Emulex OneConnect" \
"Open-iSCSI Driver version" BUILD_STR
#define DRV_DESC BE_NAME " " "Driver"
@ -502,6 +502,7 @@ struct beiscsi_io_task {
struct sgl_handle *psgl_handle;
struct beiscsi_conn *conn;
struct scsi_cmnd *scsi_cmnd;
struct hwi_wrb_context *pwrb_context;
unsigned int cmd_sn;
unsigned int flags;
unsigned short cid;
@ -833,7 +834,8 @@ struct amap_iscsi_wrb_v2 {
} __packed;
struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid);
struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid,
struct hwi_wrb_context **pcontext);
void
free_mgmt_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle);
@ -1044,7 +1046,6 @@ enum hwh_type_enum {
struct wrb_handle {
enum hwh_type_enum type;
unsigned short wrb_index;
unsigned short nxt_wrb_index;
struct iscsi_task *pio_handle;
struct iscsi_wrb *pwrb;

View File

@ -1573,7 +1573,8 @@ beiscsi_phys_port_disp(struct device *dev, struct device_attribute *attr,
void beiscsi_offload_cxn_v0(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle,
struct be_mem_descriptor *mem_descr)
struct be_mem_descriptor *mem_descr,
struct hwi_wrb_context *pwrb_context)
{
struct iscsi_wrb *pwrb = pwrb_handle->pwrb;
@ -1617,7 +1618,14 @@ void beiscsi_offload_cxn_v0(struct beiscsi_offload_params *params,
max_burst_length) / 32]);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, ptr2nextwrb,
pwrb, pwrb_handle->nxt_wrb_index);
pwrb, pwrb_handle->wrb_index);
if (pwrb_context->plast_wrb)
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
ptr2nextwrb,
pwrb_context->plast_wrb,
pwrb_handle->wrb_index);
pwrb_context->plast_wrb = pwrb;
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb,
session_state, pwrb, 0);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb, compltonack,
@ -1637,7 +1645,8 @@ void beiscsi_offload_cxn_v0(struct beiscsi_offload_params *params,
}
void beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle)
struct wrb_handle *pwrb_handle,
struct hwi_wrb_context *pwrb_context)
{
struct iscsi_wrb *pwrb = pwrb_handle->pwrb;
@ -1652,7 +1661,14 @@ void beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params,
BE_TGT_CTX_UPDT_CMD);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
ptr2nextwrb,
pwrb, pwrb_handle->nxt_wrb_index);
pwrb, pwrb_handle->wrb_index);
if (pwrb_context->plast_wrb)
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,
ptr2nextwrb,
pwrb_context->plast_wrb,
pwrb_handle->wrb_index);
pwrb_context->plast_wrb = pwrb;
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2, wrb_idx,
pwrb, pwrb_handle->wrb_index);
AMAP_SET_BITS(struct amap_iscsi_target_context_update_wrb_v2,

View File

@ -330,10 +330,13 @@ ssize_t beiscsi_phys_port_disp(struct device *dev,
void beiscsi_offload_cxn_v0(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle,
struct be_mem_descriptor *mem_descr);
struct be_mem_descriptor *mem_descr,
struct hwi_wrb_context *pwrb_context);
void beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle);
struct wrb_handle *pwrb_handle,
struct hwi_wrb_context *pwrb_context);
void beiscsi_ue_detect(struct beiscsi_hba *phba);
int be_cmd_modify_eq_delay(struct beiscsi_hba *phba,
struct be_set_eqd *, int num);

View File

@ -800,7 +800,6 @@ struct scsi_host_template bfad_im_scsi_host_template = {
.shost_attrs = bfad_im_host_attrs,
.max_sectors = BFAD_MAX_SECTORS,
.vendor_id = BFA_PCI_VENDOR_ID_BROCADE,
.use_blk_tags = 1,
};
struct scsi_host_template bfad_im_vport_template = {
@ -822,7 +821,6 @@ struct scsi_host_template bfad_im_vport_template = {
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = bfad_im_vport_attrs,
.max_sectors = BFAD_MAX_SECTORS,
.use_blk_tags = 1,
};
bfa_status_t

View File

@ -1,9 +1,9 @@
/* 57xx_hsi_bnx2fc.h: QLogic NetXtreme II Linux FCoE offload driver.
/* 57xx_hsi_bnx2fc.h: QLogic Linux FCoE offload driver.
* Handles operations such as session offload/upload etc, and manages
* session resources such as connection id and qp resources.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by

View File

@ -1,5 +1,5 @@
config SCSI_BNX2X_FCOE
tristate "QLogic NetXtreme II FCoE support"
tristate "QLogic FCoE offload support"
depends on PCI
depends on (IPV6 || IPV6=n)
depends on LIBFC
@ -9,5 +9,4 @@ config SCSI_BNX2X_FCOE
select NET_VENDOR_BROADCOM
select CNIC
---help---
This driver supports FCoE offload for the QLogic NetXtreme II
devices.
This driver supports FCoE offload for the QLogic devices.

View File

@ -1,7 +1,7 @@
/* bnx2fc.h: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc.h: QLogic Linux FCoE offload driver.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -65,7 +65,7 @@
#include "bnx2fc_constants.h"
#define BNX2FC_NAME "bnx2fc"
#define BNX2FC_VERSION "2.4.2"
#define BNX2FC_VERSION "2.9.6"
#define PFX "bnx2fc: "
@ -303,7 +303,6 @@ struct bnx2fc_rport {
#define BNX2FC_FLAG_OFLD_REQ_CMPL 0x5
#define BNX2FC_FLAG_CTX_ALLOC_FAILURE 0x6
#define BNX2FC_FLAG_UPLD_REQ_COMPL 0x7
#define BNX2FC_FLAG_EXPL_LOGO 0x8
#define BNX2FC_FLAG_DISABLE_FAILED 0x9
#define BNX2FC_FLAG_ENABLED 0xa

View File

@ -1,9 +1,9 @@
/* bnx2fc_constants.h: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_constants.h: QLogic Linux FCoE offload driver.
* Handles operations such as session offload/upload etc, and manages
* session resources such as connection id and qp resources.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by

View File

@ -1,9 +1,9 @@
/* bnx2fc_debug.c: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_debug.c: QLogic Linux FCoE offload driver.
* Handles operations such as session offload/upload etc, and manages
* session resources such as connection id and qp resources.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by

View File

@ -1,9 +1,9 @@
/* bnx2fc_debug.h: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_debug.h: QLogic Linux FCoE offload driver.
* Handles operations such as session offload/upload etc, and manages
* session resources such as connection id and qp resources.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by

View File

@ -1,10 +1,10 @@
/*
* bnx2fc_els.c: QLogic NetXtreme II Linux FCoE offload driver.
* bnx2fc_els.c: QLogic Linux FCoE offload driver.
* This file contains helper routines that handle ELS requests
* and responses.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -689,8 +689,7 @@ static int bnx2fc_initiate_els(struct bnx2fc_rport *tgt, unsigned int op,
rc = -EINVAL;
goto els_err;
}
if (!(test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) ||
(test_bit(BNX2FC_FLAG_EXPL_LOGO, &tgt->flags))) {
if (!(test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags))) {
printk(KERN_ERR PFX "els 0x%x: tgt not ready\n", op);
rc = -EINVAL;
goto els_err;
@ -707,6 +706,7 @@ static int bnx2fc_initiate_els(struct bnx2fc_rport *tgt, unsigned int op,
els_req->cb_func = cb_func;
cb_arg->io_req = els_req;
els_req->cb_arg = cb_arg;
els_req->data_xfer_len = data_len;
mp_req = (struct bnx2fc_mp_req *)&(els_req->mp_req);
rc = bnx2fc_init_mp_req(els_req);

View File

@ -1,10 +1,10 @@
/* bnx2fc_fcoe.c: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_fcoe.c: QLogic Linux FCoE offload driver.
* This file contains the code that interacts with libfc, libfcoe,
* cnic modules to create FCoE instances, send/receive non-offloaded
* FIP/FCoE packets, listen to link events etc.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -23,16 +23,16 @@ DEFINE_PER_CPU(struct bnx2fc_percpu_s, bnx2fc_percpu);
#define DRV_MODULE_NAME "bnx2fc"
#define DRV_MODULE_VERSION BNX2FC_VERSION
#define DRV_MODULE_RELDATE "Dec 11, 2013"
#define DRV_MODULE_RELDATE "October 15, 2015"
static char version[] =
"QLogic NetXtreme II FCoE Driver " DRV_MODULE_NAME \
"QLogic FCoE Driver " DRV_MODULE_NAME \
" v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
MODULE_AUTHOR("Bhanu Prakash Gollapudi <bprakash@broadcom.com>");
MODULE_DESCRIPTION("QLogic NetXtreme II BCM57710 FCoE Driver");
MODULE_DESCRIPTION("QLogic FCoE Driver");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_MODULE_VERSION);
@ -2091,7 +2091,7 @@ static int __bnx2fc_enable(struct fcoe_ctlr *ctlr)
{
struct bnx2fc_interface *interface = fcoe_ctlr_priv(ctlr);
struct bnx2fc_hba *hba;
struct cnic_fc_npiv_tbl npiv_tbl;
struct cnic_fc_npiv_tbl *npiv_tbl;
struct fc_lport *lport;
if (interface->enabled == false) {
@ -2123,11 +2123,16 @@ static int __bnx2fc_enable(struct fcoe_ctlr *ctlr)
if (!hba->cnic->get_fc_npiv_tbl)
goto done;
memset(&npiv_tbl, 0, sizeof(npiv_tbl));
if (hba->cnic->get_fc_npiv_tbl(hba->cnic, &npiv_tbl))
npiv_tbl = kzalloc(sizeof(struct cnic_fc_npiv_tbl), GFP_KERNEL);
if (!npiv_tbl)
goto done;
bnx2fc_npiv_create_vports(lport, &npiv_tbl);
if (hba->cnic->get_fc_npiv_tbl(hba->cnic, npiv_tbl))
goto done_free;
bnx2fc_npiv_create_vports(lport, npiv_tbl);
done_free:
kfree(npiv_tbl);
done:
return 0;
}
@ -2862,7 +2867,6 @@ static struct scsi_host_template bnx2fc_shost_template = {
.use_clustering = ENABLE_CLUSTERING,
.sg_tablesize = BNX2FC_MAX_BDS_PER_CMD,
.max_sectors = 1024,
.use_blk_tags = 1,
.track_queue_depth = 1,
};

View File

@ -1,9 +1,9 @@
/* bnx2fc_hwi.c: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_hwi.c: QLogic Linux FCoE offload driver.
* This file contains the code that low level functions that interact
* with 57712 FCoE firmware.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by


@ -1,8 +1,8 @@
/* bnx2fc_io.c: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_io.c: QLogic Linux FCoE offload driver.
* IO manager and SCSI IO processing.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -40,11 +40,8 @@ static void bnx2fc_cmd_timeout(struct work_struct *work)
{
struct bnx2fc_cmd *io_req = container_of(work, struct bnx2fc_cmd,
timeout_work.work);
struct fc_lport *lport;
struct fc_rport_priv *rdata;
u8 cmd_type = io_req->cmd_type;
struct bnx2fc_rport *tgt = io_req->tgt;
int logo_issued;
int rc;
BNX2FC_IO_DBG(io_req, "cmd_timeout, cmd_type = %d,"
@ -80,25 +77,14 @@ static void bnx2fc_cmd_timeout(struct work_struct *work)
io_req->refcount.refcount.counter);
if (!(test_and_set_bit(BNX2FC_FLAG_ABTS_DONE,
&io_req->req_flags))) {
lport = io_req->port->lport;
rdata = io_req->tgt->rdata;
logo_issued = test_and_set_bit(
BNX2FC_FLAG_EXPL_LOGO,
&tgt->flags);
/*
* Cleanup and return original command to
* mid-layer.
*/
bnx2fc_initiate_cleanup(io_req);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
spin_unlock_bh(&tgt->tgt_lock);
/* Explicitly logo the target */
if (!logo_issued) {
BNX2FC_IO_DBG(io_req, "Explicit "
"logo - tgt flags = 0x%lx\n",
tgt->flags);
mutex_lock(&lport->disc.disc_mutex);
lport->tt.rport_logoff(rdata);
mutex_unlock(&lport->disc.disc_mutex);
}
return;
}
} else {
@ -116,28 +102,10 @@ static void bnx2fc_cmd_timeout(struct work_struct *work)
rc = bnx2fc_initiate_abts(io_req);
if (rc == SUCCESS)
goto done;
/*
* Explicitly logo the target if
* abts initiation fails
*/
lport = io_req->port->lport;
rdata = io_req->tgt->rdata;
logo_issued = test_and_set_bit(
BNX2FC_FLAG_EXPL_LOGO,
&tgt->flags);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
spin_unlock_bh(&tgt->tgt_lock);
if (!logo_issued) {
BNX2FC_IO_DBG(io_req, "Explicit "
"logo - tgt flags = 0x%lx\n",
tgt->flags);
mutex_lock(&lport->disc.disc_mutex);
lport->tt.rport_logoff(rdata);
mutex_unlock(&lport->disc.disc_mutex);
}
return;
} else {
BNX2FC_IO_DBG(io_req, "IO already in "
@ -152,22 +120,9 @@ static void bnx2fc_cmd_timeout(struct work_struct *work)
if (!test_and_set_bit(BNX2FC_FLAG_ABTS_DONE,
&io_req->req_flags)) {
lport = io_req->port->lport;
rdata = io_req->tgt->rdata;
logo_issued = test_and_set_bit(
BNX2FC_FLAG_EXPL_LOGO,
&tgt->flags);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
spin_unlock_bh(&tgt->tgt_lock);
/* Explicitly logo the target */
if (!logo_issued) {
BNX2FC_IO_DBG(io_req, "Explicitly logo"
"(els)\n");
mutex_lock(&lport->disc.disc_mutex);
lport->tt.rport_logoff(rdata);
mutex_unlock(&lport->disc.disc_mutex);
}
return;
}
} else {
@ -623,8 +578,12 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req)
mp_req = (struct bnx2fc_mp_req *)&(io_req->mp_req);
memset(mp_req, 0, sizeof(struct bnx2fc_mp_req));
mp_req->req_len = sizeof(struct fcp_cmnd);
io_req->data_xfer_len = mp_req->req_len;
if (io_req->cmd_type != BNX2FC_ELS) {
mp_req->req_len = sizeof(struct fcp_cmnd);
io_req->data_xfer_len = mp_req->req_len;
} else
mp_req->req_len = io_req->data_xfer_len;
mp_req->req_buf = dma_alloc_coherent(&hba->pcidev->dev, CNIC_PAGE_SIZE,
&mp_req->req_buf_dma,
GFP_ATOMIC);
@ -1108,18 +1067,11 @@ int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd)
return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET);
}
int bnx2fc_expl_logo(struct fc_lport *lport, struct bnx2fc_cmd *io_req)
int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
{
struct bnx2fc_rport *tgt = io_req->tgt;
struct fc_rport_priv *rdata = tgt->rdata;
int logo_issued;
int rc = SUCCESS;
int wait_cnt = 0;
BNX2FC_IO_DBG(io_req, "Expl logo - tgt flags = 0x%lx\n",
tgt->flags);
logo_issued = test_and_set_bit(BNX2FC_FLAG_EXPL_LOGO,
&tgt->flags);
io_req->wait_for_comp = 1;
bnx2fc_initiate_cleanup(io_req);
@ -1132,21 +1084,8 @@ int bnx2fc_expl_logo(struct fc_lport *lport, struct bnx2fc_cmd *io_req)
* release the reference taken in eh_abort to allow the
* target to re-login after flushing IOs
*/
kref_put(&io_req->refcount, bnx2fc_cmd_release);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
if (!logo_issued) {
clear_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags);
mutex_lock(&lport->disc.disc_mutex);
lport->tt.rport_logoff(rdata);
mutex_unlock(&lport->disc.disc_mutex);
do {
msleep(BNX2FC_RELOGIN_WAIT_TIME);
if (wait_cnt++ > BNX2FC_RELOGIN_WAIT_CNT) {
rc = FAILED;
break;
}
} while (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags));
}
spin_lock_bh(&tgt->tgt_lock);
return rc;
}
@ -1248,7 +1187,7 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
if (cancel_delayed_work(&io_req->timeout_work))
kref_put(&io_req->refcount,
bnx2fc_cmd_release); /* drop timer hold */
rc = bnx2fc_expl_logo(lport, io_req);
rc = bnx2fc_abts_cleanup(io_req);
/* This only occurs when a task abort was requested while ABTS
is in progress. Setting the IO_CLEANUP flag will skip the
RRQ process in the case when the fw generated SCSI_CMD cmpl
@ -1287,7 +1226,7 @@ int bnx2fc_eh_abort(struct scsi_cmnd *sc_cmd)
/* Let the scsi-ml try to recover this command */
printk(KERN_ERR PFX "abort failed, xid = 0x%x\n",
io_req->xid);
rc = bnx2fc_expl_logo(lport, io_req);
rc = bnx2fc_abts_cleanup(io_req);
goto out;
} else {
/*
@ -1755,7 +1694,10 @@ static void bnx2fc_parse_fcp_rsp(struct bnx2fc_cmd *io_req,
int fcp_rsp_len = 0;
io_req->fcp_status = FC_GOOD;
io_req->fcp_resid = fcp_rsp->fcp_resid;
io_req->fcp_resid = 0;
if (rsp_flags & (FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER |
FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER))
io_req->fcp_resid = fcp_rsp->fcp_resid;
io_req->scsi_comp_flags = rsp_flags;
CMD_SCSI_STATUS(sc_cmd) = io_req->cdb_status =
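The last hunk of this file only honours fcp_resid when the response carries FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER or _OVER; without one of those flags the residual field is not meaningful and is now treated as zero. A hedged sketch of the check in isolation, with RESID_UNDER/RESID_OVER standing in for the driver's flag names (bit values here are illustrative only):

#include <linux/types.h>

#define RESID_UNDER	0x01	/* stand-in for FCOE_FCP_RSP_FLAGS_FCP_RESID_UNDER */
#define RESID_OVER	0x02	/* stand-in for FCOE_FCP_RSP_FLAGS_FCP_RESID_OVER */

static u32 fcp_valid_resid(u8 rsp_flags, u32 fcp_resid)
{
	/* The residual count only means something when the target flagged
	 * an underrun or overrun; otherwise report no residual. */
	if (rsp_flags & (RESID_UNDER | RESID_OVER))
		return fcp_resid;
	return 0;
}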


@ -1,9 +1,9 @@
/* bnx2fc_tgt.c: QLogic NetXtreme II Linux FCoE offload driver.
/* bnx2fc_tgt.c: QLogic Linux FCoE offload driver.
* Handles operations such as session offload/upload etc, and manages
* session resources such as connection id and qp resources.
*
* Copyright (c) 2008 - 2013 Broadcom Corporation
* Copyright (c) 2014, QLogic Corporation
* Copyright (c) 2008-2013 Broadcom Corporation
* Copyright (c) 2014-2015 QLogic Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -560,12 +560,6 @@ void bnx2fc_rport_event_handler(struct fc_lport *lport,
(hba->num_ofld_sess == 0)) {
wake_up_interruptible(&hba->shutdown_wait);
}
if (test_bit(BNX2FC_FLAG_EXPL_LOGO, &tgt->flags)) {
printk(KERN_ERR PFX "Relogin to the tgt\n");
mutex_lock(&lport->disc.disc_mutex);
lport->tt.rport_login(rdata);
mutex_unlock(&lport->disc.disc_mutex);
}
mutex_unlock(&hba->hba_mutex);
break;


@ -2283,7 +2283,6 @@ struct scsi_host_template csio_fcoe_shost_template = {
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = csio_fcoe_lport_attrs,
.max_sectors = CSIO_MAX_SECTOR_SIZE,
.use_blk_tags = 1,
};
struct scsi_host_template csio_fcoe_shost_vport_template = {
@ -2303,7 +2302,6 @@ struct scsi_host_template csio_fcoe_shost_vport_template = {
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = csio_fcoe_vport_attrs,
.max_sectors = CSIO_MAX_SECTOR_SIZE,
.use_blk_tags = 1,
};
/*


@ -256,7 +256,6 @@ static struct scsi_host_template driver_template = {
.proc_name = ESAS2R_DRVR_NAME,
.change_queue_depth = scsi_change_queue_depth,
.max_sectors = 0xFFFF,
.use_blk_tags = 1,
};
int sgl_page_size = 512;


@ -2694,7 +2694,6 @@ struct scsi_host_template scsi_esp_template = {
.use_clustering = ENABLE_CLUSTERING,
.max_sectors = 0xffff,
.skip_settle_delay = 1,
.use_blk_tags = 1,
};
EXPORT_SYMBOL(scsi_esp_template);


@ -287,7 +287,6 @@ static struct scsi_host_template fcoe_shost_template = {
.use_clustering = ENABLE_CLUSTERING,
.sg_tablesize = SG_ALL,
.max_sectors = 0xffff,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -1873,7 +1872,6 @@ static int fcoe_percpu_receive_thread(void *arg)
set_user_nice(current, MIN_NICE);
retry:
while (!kthread_should_stop()) {
spin_lock_bh(&p->fcoe_rx_list.lock);
@ -1883,7 +1881,7 @@ static int fcoe_percpu_receive_thread(void *arg)
set_current_state(TASK_INTERRUPTIBLE);
spin_unlock_bh(&p->fcoe_rx_list.lock);
schedule();
goto retry;
continue;
}
spin_unlock_bh(&p->fcoe_rx_list.lock);


@ -118,7 +118,6 @@ static struct scsi_host_template fnic_host_template = {
.sg_tablesize = FNIC_MAX_SG_DESC_CNT,
.max_sectors = 0xffff,
.shost_attrs = fnic_attrs,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -697,13 +696,6 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
fnic->fnic_max_tag_id = host->can_queue;
err = scsi_init_shared_tag_map(host, fnic->fnic_max_tag_id);
if (err) {
shost_printk(KERN_ERR, fnic->lport->host,
"Unable to alloc shared tag map\n");
goto err_out_dev_close;
}
host->max_lun = fnic->config.luns_per_tgt;
host->max_id = FNIC_MAX_FCP_TARGET;
host->max_cmd_len = FCOE_MAX_CMD_LEN;


@ -217,6 +217,13 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
error = scsi_mq_setup_tags(shost);
if (error)
goto fail;
} else {
shost->bqt = blk_init_tags(shost->can_queue,
shost->hostt->tag_alloc_policy);
if (!shost->bqt) {
error = -ENOMEM;
goto fail;
}
}
/*

File diff suppressed because it is too large
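Taken together, the fnic hunk above (which drops its scsi_init_shared_tag_map() call) and the scsi_add_host_with_dma() hunk before this point show the new scheme: on the legacy, non-blk-mq path the midlayer now allocates shost->bqt itself with blk_init_tags(), which is also why the per-driver .use_blk_tags = 1 lines disappear throughout this series. A hedged sketch of what a driver's attach path looks like afterwards (function name and queue depth are hypothetical):

#include <linux/pci.h>
#include <scsi/scsi_host.h>

static int my_attach(struct Scsi_Host *shost, struct pci_dev *pdev)
{
	/* Tagging is set up inside scsi_add_host(); the driver no longer
	 * calls scsi_init_shared_tag_map() or sets .use_blk_tags. */
	shost->can_queue = 256;			/* hypothetical queue depth */
	return scsi_add_host(shost, &pdev->dev);
}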


@ -33,12 +33,38 @@ struct access_method {
unsigned long (*command_completed)(struct ctlr_info *h, u8 q);
};
/* for SAS hosts and SAS expanders */
struct hpsa_sas_node {
struct device *parent_dev;
struct list_head port_list_head;
};
struct hpsa_sas_port {
struct list_head port_list_entry;
u64 sas_address;
struct sas_port *port;
int next_phy_index;
struct list_head phy_list_head;
struct hpsa_sas_node *parent_node;
struct sas_rphy *rphy;
};
struct hpsa_sas_phy {
struct list_head phy_list_entry;
struct sas_phy *phy;
struct hpsa_sas_port *parent_port;
bool added_to_port;
};
struct hpsa_scsi_dev_t {
int devtype;
unsigned int devtype;
int bus, target, lun; /* as presented to the OS */
unsigned char scsi3addr[8]; /* as presented to the HW */
u8 physical_device : 1;
u8 expose_device;
#define RAID_CTLR_LUNID "\0\0\0\0\0\0\0\0"
unsigned char device_id[16]; /* from inquiry pg. 0x83 */
u64 sas_address;
unsigned char vendor[8]; /* bytes 8-15 of inquiry data */
unsigned char model[16]; /* bytes 16-31 of inquiry data */
unsigned char raid_level; /* from inquiry page 0xC1 */
@ -75,11 +101,8 @@ struct hpsa_scsi_dev_t {
struct hpsa_scsi_dev_t *phys_disk[RAID_MAP_MAX_ENTRIES];
int nphysical_disks;
int supports_aborts;
#define HPSA_DO_NOT_EXPOSE 0x0
#define HPSA_SG_ATTACH 0x1
#define HPSA_ULD_ATTACH 0x2
#define HPSA_SCSI_ADD (HPSA_SG_ATTACH | HPSA_ULD_ATTACH)
u8 expose_state;
struct hpsa_sas_port *sas_port;
int external; /* 1-from external array 0-not <0-unknown */
};
struct reply_queue_buffer {
@ -136,6 +159,7 @@ struct ctlr_info {
char *product_name;
struct pci_dev *pdev;
u32 board_id;
u64 sas_address;
void __iomem *vaddr;
unsigned long paddr;
int nr_cmds; /* Number of commands allowed on this controller */
@ -262,7 +286,10 @@ struct ctlr_info {
spinlock_t offline_device_lock;
struct list_head offline_device_list;
int acciopath_status;
int drv_req_rescan;
int raid_offload_debug;
int discovery_polling;
struct ReportLUNdata *lastlogicals;
int needs_abort_tags_swizzled;
struct workqueue_struct *resubmit_wq;
struct workqueue_struct *rescan_ctlr_wq;
@ -270,6 +297,8 @@ struct ctlr_info {
wait_queue_head_t abort_cmd_wait_queue;
wait_queue_head_t event_sync_wait_queue;
struct mutex reset_mutex;
u8 reset_in_progress;
struct hpsa_sas_node *sas_host;
};
struct offline_device_entry {
@ -283,6 +312,7 @@ struct offline_device_entry {
#define HPSA_RESET_TYPE_BUS 0x01
#define HPSA_RESET_TYPE_TARGET 0x03
#define HPSA_RESET_TYPE_LUN 0x04
#define HPSA_PHYS_TARGET_RESET 0x99 /* not defined by cciss spec */
#define HPSA_MSG_SEND_RETRY_LIMIT 10
#define HPSA_MSG_SEND_RETRY_INTERVAL_MSECS (10000)
@ -367,6 +397,11 @@ struct offline_device_entry {
#define IOACCEL2_INBOUND_POSTQ_64_LOW 0xd0
#define IOACCEL2_INBOUND_POSTQ_64_HI 0xd4
#define HPSA_PHYSICAL_DEVICE_BUS 0
#define HPSA_RAID_VOLUME_BUS 1
#define HPSA_EXTERNAL_RAID_VOLUME_BUS 2
#define HPSA_HBA_BUS 3
/*
Send the command to the hardware
*/


@ -260,8 +260,6 @@ struct ext_report_lun_entry {
u8 wwid[8];
u8 device_type;
u8 device_flags;
#define NON_DISK_PHYS_DEV(x) ((x)[17] & 0x01)
#define PHYS_IOACCEL(x) ((x)[17] & 0x08)
u8 lun_count; /* multi-lun device, how many luns */
u8 redundant_paths;
u32 ioaccel_handle; /* ioaccel1 only uses lower 16 bits */
@ -288,6 +286,11 @@ struct SenseSubsystem_info {
#define BMIC_FLASH_FIRMWARE 0xF7
#define BMIC_SENSE_CONTROLLER_PARAMETERS 0x64
#define BMIC_IDENTIFY_PHYSICAL_DEVICE 0x15
#define BMIC_IDENTIFY_CONTROLLER 0x11
#define BMIC_SET_DIAG_OPTIONS 0xF4
#define BMIC_SENSE_DIAG_OPTIONS 0xF5
#define HPSA_DIAG_OPTS_DISABLE_RLD_CACHING 0x40000000
#define BMIC_SENSE_SUBSYSTEM_INFORMATION 0x66
/* Command List Structure */
union SCSI3Addr {
@ -684,6 +687,16 @@ struct hpsa_pci_info {
u32 board_id;
};
struct bmic_identify_controller {
u8 configured_logical_drive_count; /* offset 0 */
u8 pad1[153];
__le16 extended_logical_unit_count; /* offset 154 */
u8 pad2[136];
u8 controller_mode; /* offset 292 */
u8 pad3[32];
};
struct bmic_identify_physical_device {
u8 scsi_bus; /* SCSI Bus number on controller */
u8 scsi_id; /* SCSI ID on this bus */
@ -816,5 +829,18 @@ struct bmic_identify_physical_device {
u8 padding[112];
};
struct bmic_sense_subsystem_info {
u8 primary_slot_number;
u8 reserved[3];
u8 chasis_serial_number[32];
u8 primary_world_wide_id[8];
u8 primary_array_serial_number[32]; /* NULL terminated */
u8 primary_cache_serial_number[32]; /* NULL terminated */
u8 reserved_2[8];
u8 secondary_array_serial_number[32];
u8 secondary_cache_serial_number[32];
u8 pad[332];
};
#pragma pack()
#endif /* HPSA_CMD_H */


@ -3095,7 +3095,6 @@ static struct scsi_host_template driver_template = {
.max_sectors = IBMVFC_MAX_SECTORS,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = ibmvfc_attrs,
.use_blk_tags = 1,
.track_queue_depth = 1,
};


@ -106,9 +106,9 @@ MODULE_LICENSE("GPL");
MODULE_VERSION(IBMVSCSI_VERSION);
module_param_named(max_id, max_id, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(max_id, "Largest ID value for each channel");
MODULE_PARM_DESC(max_id, "Largest ID value for each channel [Default=64]");
module_param_named(max_channel, max_channel, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(max_channel, "Largest channel value");
MODULE_PARM_DESC(max_channel, "Largest channel value [Default=3]");
module_param_named(init_timeout, init_timeout, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(init_timeout, "Initialization timeout in seconds");
module_param_named(max_requests, max_requests, int, S_IRUGO);
@ -2289,11 +2289,15 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
goto init_pool_failed;
}
host->max_lun = 8;
host->max_lun = IBMVSCSI_MAX_LUN;
host->max_id = max_id;
host->max_channel = max_channel;
host->max_cmd_len = 16;
dev_info(dev,
"Maximum ID: %d Maximum LUN: %llu Maximum Channel: %d\n",
host->max_id, host->max_lun, host->max_channel);
if (scsi_add_host(hostdata->host, hostdata->dev))
goto add_host_failed;


@ -48,6 +48,7 @@ struct Scsi_Host;
#define IBMVSCSI_CMDS_PER_LUN_DEFAULT 16
#define IBMVSCSI_MAX_SECTORS_DEFAULT 256 /* 32 * 8 = default max I/O 32 pages */
#define IBMVSCSI_MAX_CMDS_PER_LUN 64
#define IBMVSCSI_MAX_LUN 32
/* ------------------------------------------------------------
* Data Structures


@ -6363,15 +6363,19 @@ static int ipr_queuecommand(struct Scsi_Host *shost,
ipr_cmd->scsi_cmd = scsi_cmd;
ipr_cmd->done = ipr_scsi_eh_done;
if (ipr_is_gscsi(res) || ipr_is_vset_device(res)) {
if (ipr_is_gscsi(res)) {
if (scsi_cmd->underflow == 0)
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_ULEN_CHK;
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC;
if (ipr_is_gscsi(res) && res->reset_occurred) {
if (res->reset_occurred) {
res->reset_occurred = 0;
ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_DELAY_AFTER_RST;
}
}
if (ipr_is_gscsi(res) || ipr_is_vset_device(res)) {
ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC;
ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_ALIGNED_BFR;
if (scsi_cmd->flags & SCMD_TAGGED)
ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_SIMPLE_TASK;
@ -6502,7 +6506,6 @@ static struct scsi_host_template driver_template = {
.shost_attrs = ipr_ioa_attrs,
.sdev_attrs = ipr_dev_attrs,
.proc_name = IPR_NAME,
.use_blk_tags = 1,
};
/**
@ -7671,6 +7674,63 @@ static int ipr_ioafp_query_ioa_cfg(struct ipr_cmnd *ipr_cmd)
return IPR_RC_JOB_RETURN;
}
static int ipr_ioa_service_action_failed(struct ipr_cmnd *ipr_cmd)
{
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
if (ioasc == IPR_IOASC_IR_INVALID_REQ_TYPE_OR_PKT)
return IPR_RC_JOB_CONTINUE;
return ipr_reset_cmd_failed(ipr_cmd);
}
static void ipr_build_ioa_service_action(struct ipr_cmnd *ipr_cmd,
__be32 res_handle, u8 sa_code)
{
struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
ioarcb->res_handle = res_handle;
ioarcb->cmd_pkt.cdb[0] = IPR_IOA_SERVICE_ACTION;
ioarcb->cmd_pkt.cdb[1] = sa_code;
ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
}
/**
* ipr_ioafp_set_caching_parameters - Issue Set Cache parameters service
* action
*
* Return value:
* IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
**/
static int ipr_ioafp_set_caching_parameters(struct ipr_cmnd *ipr_cmd)
{
struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct ipr_inquiry_pageC4 *pageC4 = &ioa_cfg->vpd_cbs->pageC4_data;
ENTER;
ipr_cmd->job_step = ipr_ioafp_query_ioa_cfg;
if (pageC4->cache_cap[0] & IPR_CAP_SYNC_CACHE) {
ipr_build_ioa_service_action(ipr_cmd,
cpu_to_be32(IPR_IOA_RES_HANDLE),
IPR_IOA_SA_CHANGE_CACHE_PARAMS);
ioarcb->cmd_pkt.cdb[2] = 0x40;
ipr_cmd->job_step_failed = ipr_ioa_service_action_failed;
ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
IPR_SET_SUP_DEVICE_TIMEOUT);
LEAVE;
return IPR_RC_JOB_RETURN;
}
LEAVE;
return IPR_RC_JOB_CONTINUE;
}
/**
* ipr_ioafp_inquiry - Send an Inquiry to the adapter.
* @ipr_cmd: ipr command struct
@ -7721,6 +7781,39 @@ static int ipr_inquiry_page_supported(struct ipr_inquiry_page0 *page0, u8 page)
return 0;
}
/**
* ipr_ioafp_pageC4_inquiry - Send a Page 0xC4 Inquiry to the adapter.
* @ipr_cmd: ipr command struct
*
* This function sends a Page 0xC4 inquiry to the adapter
* to retrieve software VPD information.
*
* Return value:
* IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
**/
static int ipr_ioafp_pageC4_inquiry(struct ipr_cmnd *ipr_cmd)
{
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct ipr_inquiry_page0 *page0 = &ioa_cfg->vpd_cbs->page0_data;
struct ipr_inquiry_pageC4 *pageC4 = &ioa_cfg->vpd_cbs->pageC4_data;
ENTER;
ipr_cmd->job_step = ipr_ioafp_set_caching_parameters;
memset(pageC4, 0, sizeof(*pageC4));
if (ipr_inquiry_page_supported(page0, 0xC4)) {
ipr_ioafp_inquiry(ipr_cmd, 1, 0xC4,
(ioa_cfg->vpd_cbs_dma
+ offsetof(struct ipr_misc_cbs,
pageC4_data)),
sizeof(struct ipr_inquiry_pageC4));
return IPR_RC_JOB_RETURN;
}
LEAVE;
return IPR_RC_JOB_CONTINUE;
}
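For orientation, the new inquiry steps slot into the adapter reset job chain as follows; this ordering is reconstructed from the hunk above and the ipr_ioafp_cap_inquiry hunk just below, so treat it as a reading aid rather than driver code.

/*
 * Reconstructed job_step ordering around the IOA config query:
 *
 *   ipr_ioafp_cap_inquiry()             -> job_step = ipr_ioafp_pageC4_inquiry
 *   ipr_ioafp_pageC4_inquiry()          -> job_step = ipr_ioafp_set_caching_parameters
 *   ipr_ioafp_set_caching_parameters()  -> job_step = ipr_ioafp_query_ioa_cfg
 *
 * Each new step returns IPR_RC_JOB_CONTINUE when its page or capability
 * is absent, so older adapters fall straight through to the config query.
 */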
/**
* ipr_ioafp_cap_inquiry - Send a Page 0xD0 Inquiry to the adapter.
* @ipr_cmd: ipr command struct
@ -7738,7 +7831,7 @@ static int ipr_ioafp_cap_inquiry(struct ipr_cmnd *ipr_cmd)
struct ipr_inquiry_cap *cap = &ioa_cfg->vpd_cbs->cap;
ENTER;
ipr_cmd->job_step = ipr_ioafp_query_ioa_cfg;
ipr_cmd->job_step = ipr_ioafp_pageC4_inquiry;
memset(cap, 0, sizeof(*cap));
if (ipr_inquiry_page_supported(page0, 0xD0)) {
@ -8277,6 +8370,42 @@ static int ipr_reset_get_unit_check_job(struct ipr_cmnd *ipr_cmd)
return IPR_RC_JOB_RETURN;
}
static int ipr_dump_mailbox_wait(struct ipr_cmnd *ipr_cmd)
{
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
ENTER;
if (ioa_cfg->sdt_state != GET_DUMP)
return IPR_RC_JOB_RETURN;
if (!ioa_cfg->sis64 || !ipr_cmd->u.time_left ||
(readl(ioa_cfg->regs.sense_interrupt_reg) &
IPR_PCII_MAILBOX_STABLE)) {
if (!ipr_cmd->u.time_left)
dev_err(&ioa_cfg->pdev->dev,
"Timed out waiting for Mailbox register.\n");
ioa_cfg->sdt_state = READ_DUMP;
ioa_cfg->dump_timeout = 0;
if (ioa_cfg->sis64)
ipr_reset_start_timer(ipr_cmd, IPR_SIS64_DUMP_TIMEOUT);
else
ipr_reset_start_timer(ipr_cmd, IPR_SIS32_DUMP_TIMEOUT);
ipr_cmd->job_step = ipr_reset_wait_for_dump;
schedule_work(&ioa_cfg->work_q);
} else {
ipr_cmd->u.time_left -= IPR_CHECK_FOR_RESET_TIMEOUT;
ipr_reset_start_timer(ipr_cmd,
IPR_CHECK_FOR_RESET_TIMEOUT);
}
LEAVE;
return IPR_RC_JOB_RETURN;
}
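ipr_dump_mailbox_wait() above keeps re-arming the reset timer every IPR_CHECK_FOR_RESET_TIMEOUT until either IPR_PCII_MAILBOX_STABLE is observed or u.time_left runs out, and only then moves on to READ_DUMP. Outside of ipr's timer-driven job framework, a comparable wait could be expressed with the generic iopoll helper; the sketch below is an analogue under that assumption, not how the driver itself does it, and every name is a hypothetical stand-in.

#include <linux/device.h>
#include <linux/iopoll.h>

/* Poll a sense-interrupt register for a 'mailbox stable' bit roughly
 * every 10 ms, for up to 2 seconds. */
static int wait_mailbox_stable(struct device *dev, void __iomem *sense_reg,
			       u32 stable_bit)
{
	u32 val;
	int rc;

	rc = readl_poll_timeout(sense_reg, val, val & stable_bit,
				10000, 2000000);
	if (rc)
		dev_err(dev, "Timed out waiting for mailbox register\n");
	return rc;
}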
/**
* ipr_reset_restore_cfg_space - Restore PCI config space.
* @ipr_cmd: ipr command struct
@ -8326,20 +8455,11 @@ static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
if (ioa_cfg->in_ioa_bringdown) {
ipr_cmd->job_step = ipr_ioa_bringdown_done;
} else if (ioa_cfg->sdt_state == GET_DUMP) {
ipr_cmd->job_step = ipr_dump_mailbox_wait;
ipr_cmd->u.time_left = IPR_WAIT_FOR_MAILBOX;
} else {
ipr_cmd->job_step = ipr_reset_enable_ioa;
if (GET_DUMP == ioa_cfg->sdt_state) {
ioa_cfg->sdt_state = READ_DUMP;
ioa_cfg->dump_timeout = 0;
if (ioa_cfg->sis64)
ipr_reset_start_timer(ipr_cmd, IPR_SIS64_DUMP_TIMEOUT);
else
ipr_reset_start_timer(ipr_cmd, IPR_SIS32_DUMP_TIMEOUT);
ipr_cmd->job_step = ipr_reset_wait_for_dump;
schedule_work(&ioa_cfg->work_q);
return IPR_RC_JOB_RETURN;
}
}
LEAVE;


@ -39,8 +39,8 @@
/*
* Literals
*/
#define IPR_DRIVER_VERSION "2.6.2"
#define IPR_DRIVER_DATE "(June 11, 2015)"
#define IPR_DRIVER_VERSION "2.6.3"
#define IPR_DRIVER_DATE "(October 17, 2015)"
/*
* IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
@ -216,6 +216,10 @@
#define IPR_SET_ALL_SUPPORTED_DEVICES 0x80
#define IPR_IOA_SHUTDOWN 0xF7
#define IPR_WR_BUF_DOWNLOAD_AND_SAVE 0x05
#define IPR_IOA_SERVICE_ACTION 0xD2
/* IOA Service Actions */
#define IPR_IOA_SA_CHANGE_CACHE_PARAMS 0x14
/*
* Timeouts
@ -279,6 +283,9 @@
#define IPR_IPL_INIT_STAGE_TIME_MASK 0x0000ffff
#define IPR_PCII_IPL_STAGE_CHANGE (0x80000000 >> 0)
#define IPR_PCII_MAILBOX_STABLE (0x80000000 >> 4)
#define IPR_WAIT_FOR_MAILBOX (2 * HZ)
#define IPR_PCII_IOA_TRANS_TO_OPER (0x80000000 >> 0)
#define IPR_PCII_IOARCB_XFER_FAILED (0x80000000 >> 3)
#define IPR_PCII_IOA_UNIT_CHECKED (0x80000000 >> 4)
@ -846,6 +853,16 @@ struct ipr_inquiry_page0 {
u8 page[IPR_INQUIRY_PAGE0_ENTRIES];
}__attribute__((packed));
struct ipr_inquiry_pageC4 {
u8 peri_qual_dev_type;
u8 page_code;
u8 reserved1;
u8 len;
u8 cache_cap[4];
#define IPR_CAP_SYNC_CACHE 0x08
u8 reserved2[20];
} __packed;
struct ipr_hostrcb_device_data_entry {
struct ipr_vpd vpd;
struct ipr_res_addr dev_res_addr;
@ -1319,6 +1336,7 @@ struct ipr_misc_cbs {
struct ipr_inquiry_page0 page0_data;
struct ipr_inquiry_page3 page3_data;
struct ipr_inquiry_cap cap;
struct ipr_inquiry_pageC4 pageC4_data;
struct ipr_mode_pages mode_pages;
struct ipr_supported_device supp_dev;
};


@ -170,7 +170,6 @@ static struct scsi_host_template isci_sht = {
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
.shost_attrs = isci_host_attrs,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -272,11 +271,11 @@ static void isci_unregister(struct isci_host *isci_host)
if (!isci_host)
return;
shost = to_shost(isci_host);
scsi_remove_host(shost);
sas_unregister_ha(&isci_host->sas_ha);
shost = to_shost(isci_host);
sas_remove_host(shost);
scsi_remove_host(shost);
scsi_host_put(shost);
}


@ -25,7 +25,7 @@
#include <linux/export.h>
/**
* fc_vport_create() - Create a new NPIV vport instance
* libfc_vport_create() - Create a new NPIV vport instance
* @vport: fc_vport structure from scsi_transport_fc
* @privsize: driver private data size to allocate along with the Scsi_Host
*/


@ -5173,7 +5173,6 @@ lpfc_els_rcv_lcb(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
rjt_err = LSRJT_CMD_UNSUPPORTED;
goto rjt;
}
lcb_context = kmalloc(sizeof(struct lpfc_lcb_context), GFP_KERNEL);
if (phba->hba_flag & HBA_FCOE_MODE) {
rjt_err = LSRJT_CMD_UNSUPPORTED;
@ -5204,6 +5203,12 @@ lpfc_els_rcv_lcb(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
goto rjt;
}
lcb_context = kmalloc(sizeof(*lcb_context), GFP_KERNEL);
if (!lcb_context) {
rjt_err = LSRJT_UNABLE_TPC;
goto rjt;
}
state = (beacon->lcb_sub_command == LPFC_LCB_ON) ? 1 : 0;
lcb_context->sub_command = beacon->lcb_sub_command;
lcb_context->type = beacon->lcb_type;
@ -5214,6 +5219,7 @@ lpfc_els_rcv_lcb(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
if (lpfc_sli4_set_beacon(vport, lcb_context, state)) {
lpfc_printf_vlog(ndlp->vport, KERN_ERR,
LOG_ELS, "0193 failed to send mail box");
kfree(lcb_context);
lpfc_nlp_put(ndlp);
rjt_err = LSRJT_UNABLE_TPC;
goto rjt;


@ -5914,7 +5914,6 @@ struct scsi_host_template lpfc_template_s3 = {
.max_sectors = 0xFFFF,
.vendor_id = LPFC_NL_VENDOR_ID,
.change_queue_depth = scsi_change_queue_depth,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -5940,7 +5939,6 @@ struct scsi_host_template lpfc_template = {
.max_sectors = 0xFFFF,
.vendor_id = LPFC_NL_VENDOR_ID,
.change_queue_depth = scsi_change_queue_depth,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -5964,6 +5962,5 @@ struct scsi_host_template lpfc_vport_template = {
.shost_attrs = lpfc_vport_attrs,
.max_sectors = 0xFFFF,
.change_queue_depth = scsi_change_queue_depth,
.use_blk_tags = 1,
.track_queue_depth = 1,
};


@ -35,8 +35,8 @@
/*
* MegaRAID SAS Driver meta data
*/
#define MEGASAS_VERSION "06.807.10.00-rc1"
#define MEGASAS_RELDATE "March 6, 2015"
#define MEGASAS_VERSION "06.808.16.00-rc1"
#define MEGASAS_RELDATE "Oct. 8, 2015"
/*
* Device IDs
@ -52,6 +52,10 @@
#define PCI_DEVICE_ID_LSI_PLASMA 0x002f
#define PCI_DEVICE_ID_LSI_INVADER 0x005d
#define PCI_DEVICE_ID_LSI_FURY 0x005f
#define PCI_DEVICE_ID_LSI_INTRUDER 0x00ce
#define PCI_DEVICE_ID_LSI_INTRUDER_24 0x00cf
#define PCI_DEVICE_ID_LSI_CUTLASS_52 0x0052
#define PCI_DEVICE_ID_LSI_CUTLASS_53 0x0053
/*
* Intel HBA SSDIDs
@ -62,6 +66,14 @@
#define MEGARAID_INTEL_RS3MC044_SSDID 0x9381
#define MEGARAID_INTEL_RS3WC080_SSDID 0x9341
#define MEGARAID_INTEL_RS3WC040_SSDID 0x9343
#define MEGARAID_INTEL_RMS3BC160_SSDID 0x352B
/*
* Intruder HBA SSDIDs
*/
#define MEGARAID_INTRUDER_SSDID1 0x9371
#define MEGARAID_INTRUDER_SSDID2 0x9390
#define MEGARAID_INTRUDER_SSDID3 0x9370
/*
* Intel HBA branding
@ -78,6 +90,8 @@
"Intel(R) RAID Controller RS3WC080"
#define MEGARAID_INTEL_RS3WC040_BRANDING \
"Intel(R) RAID Controller RS3WC040"
#define MEGARAID_INTEL_RMS3BC160_BRANDING \
"Intel(R) Integrated RAID Module RMS3BC160"
/*
* =====================================
@ -273,6 +287,16 @@ enum MFI_STAT {
MFI_STAT_INVALID_STATUS = 0xFF
};
enum mfi_evt_class {
MFI_EVT_CLASS_DEBUG = -2,
MFI_EVT_CLASS_PROGRESS = -1,
MFI_EVT_CLASS_INFO = 0,
MFI_EVT_CLASS_WARNING = 1,
MFI_EVT_CLASS_CRITICAL = 2,
MFI_EVT_CLASS_FATAL = 3,
MFI_EVT_CLASS_DEAD = 4
};
/*
* Crash dump related defines
*/
@ -364,6 +388,8 @@ enum MR_EVT_ARGS {
MR_EVT_ARGS_GENERIC,
};
#define SGE_BUFFER_SIZE 4096
/*
* define constants for device list query options
*/
@ -394,6 +420,7 @@ enum MR_LD_QUERY_TYPE {
#define MR_EVT_FOREIGN_CFG_IMPORTED 0x00db
#define MR_EVT_LD_OFFLINE 0x00fc
#define MR_EVT_CTRL_HOST_BUS_SCAN_REQUESTED 0x0152
#define MR_EVT_CTRL_PROP_CHANGED 0x012f
enum MR_PD_STATE {
MR_PD_STATE_UNCONFIGURED_GOOD = 0x00,
@ -973,7 +1000,12 @@ struct megasas_ctrl_info {
struct {
#if defined(__BIG_ENDIAN_BITFIELD)
u32 reserved:12;
u32 reserved:7;
u32 useSeqNumJbodFP:1;
u32 supportExtendedSSCSize:1;
u32 supportDiskCacheSettingForSysPDs:1;
u32 supportCPLDUpdate:1;
u32 supportTTYLogCompression:1;
u32 discardCacheDuringLDDelete:1;
u32 supportSecurityonJBOD:1;
u32 supportCacheBypassModes:1;
@ -1013,7 +1045,12 @@ struct megasas_ctrl_info {
u32 supportCacheBypassModes:1;
u32 supportSecurityonJBOD:1;
u32 discardCacheDuringLDDelete:1;
u32 reserved:12;
u32 supportTTYLogCompression:1;
u32 supportCPLDUpdate:1;
u32 supportDiskCacheSettingForSysPDs:1;
u32 supportExtendedSSCSize:1;
u32 useSeqNumJbodFP:1;
u32 reserved:7;
#endif
} adapterOperations3;
@ -1229,7 +1266,9 @@ union megasas_sgl_frame {
typedef union _MFI_CAPABILITIES {
struct {
#if defined(__BIG_ENDIAN_BITFIELD)
u32 reserved:25;
u32 reserved:23;
u32 support_ext_io_size:1;
u32 support_ext_queue_depth:1;
u32 security_protocol_cmds_fw:1;
u32 support_core_affinity:1;
u32 support_ndrive_r1_lb:1;
@ -1245,7 +1284,9 @@ typedef union _MFI_CAPABILITIES {
u32 support_ndrive_r1_lb:1;
u32 support_core_affinity:1;
u32 security_protocol_cmds_fw:1;
u32 reserved:25;
u32 support_ext_queue_depth:1;
u32 support_ext_io_size:1;
u32 reserved:23;
#endif
} mfi_capabilities;
__le32 reg;
@ -1690,6 +1731,7 @@ struct megasas_instance {
u32 crash_dump_drv_support;
u32 crash_dump_app_support;
u32 secure_jbod_support;
bool use_seqnum_jbod_fp; /* Added for PD sequence */
spinlock_t crashdump_lock;
struct megasas_register_set __iomem *reg_set;
@ -1748,6 +1790,7 @@ struct megasas_instance {
u8 UnevenSpanSupport;
u8 supportmax256vd;
u8 allow_fw_scan;
u16 fw_supported_vd_count;
u16 fw_supported_pd_count;
@ -1769,7 +1812,9 @@ struct megasas_instance {
struct msix_entry msixentry[MEGASAS_MAX_MSIX_QUEUES];
struct megasas_irq_context irq_context[MEGASAS_MAX_MSIX_QUEUES];
u64 map_id;
u64 pd_seq_map_id;
struct megasas_cmd *map_update_cmd;
struct megasas_cmd *jbod_seq_cmd;
unsigned long bar;
long reset_flags;
struct mutex reset_mutex;
@ -1780,6 +1825,7 @@ struct megasas_instance {
char mpio;
u16 throttlequeuedepth;
u8 mask_interrupts;
u16 max_chain_frame_sz;
u8 is_imr;
bool dev_handle;
};
@ -1985,6 +2031,9 @@ __le16 get_updated_dev_handle(struct megasas_instance *instance,
void mr_update_load_balance_params(struct MR_DRV_RAID_MAP_ALL *map,
struct LD_LOAD_BALANCE_INFO *lbInfo);
int megasas_get_ctrl_info(struct megasas_instance *instance);
/* PD sequence */
int
megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend);
int megasas_set_crash_dump_params(struct megasas_instance *instance,
u8 crash_buf_state);
void megasas_free_host_crash_buffer(struct megasas_instance *instance);
@ -2000,5 +2049,6 @@ void __megasas_return_cmd(struct megasas_instance *instance,
void megasas_return_mfi_mpt_pthr(struct megasas_instance *instance,
struct megasas_cmd *cmd_mfi, struct megasas_cmd_fusion *cmd_fusion);
int megasas_cmd_type(struct scsi_cmnd *cmd);
void megasas_setup_jbod_map(struct megasas_instance *instance);
#endif /*LSI_MEGARAID_SAS_H */


@ -135,6 +135,12 @@ static struct pci_device_id megasas_pci_table[] = {
/* Invader */
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_FURY)},
/* Fury */
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_INTRUDER)},
/* Intruder */
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_INTRUDER_24)},
/* Intruder 24 port*/
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_CUTLASS_52)},
{PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_CUTLASS_53)},
{}
};
@ -260,6 +266,66 @@ megasas_return_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd)
}
static const char *
format_timestamp(uint32_t timestamp)
{
static char buffer[32];
if ((timestamp & 0xff000000) == 0xff000000)
snprintf(buffer, sizeof(buffer), "boot + %us", timestamp &
0x00ffffff);
else
snprintf(buffer, sizeof(buffer), "%us", timestamp);
return buffer;
}
static const char *
format_class(int8_t class)
{
static char buffer[6];
switch (class) {
case MFI_EVT_CLASS_DEBUG:
return "debug";
case MFI_EVT_CLASS_PROGRESS:
return "progress";
case MFI_EVT_CLASS_INFO:
return "info";
case MFI_EVT_CLASS_WARNING:
return "WARN";
case MFI_EVT_CLASS_CRITICAL:
return "CRIT";
case MFI_EVT_CLASS_FATAL:
return "FATAL";
case MFI_EVT_CLASS_DEAD:
return "DEAD";
default:
snprintf(buffer, sizeof(buffer), "%d", class);
return buffer;
}
}
/**
* megasas_decode_evt: Decode FW AEN event and print critical event
* for information.
* @instance: Adapter soft state
*/
static void
megasas_decode_evt(struct megasas_instance *instance)
{
struct megasas_evt_detail *evt_detail = instance->evt_detail;
union megasas_evt_class_locale class_locale;
class_locale.word = le32_to_cpu(evt_detail->cl.word);
if (class_locale.members.class >= MFI_EVT_CLASS_CRITICAL)
dev_info(&instance->pdev->dev, "%d (%s/0x%04x/%s) - %s\n",
le32_to_cpu(evt_detail->seq_num),
format_timestamp(le32_to_cpu(evt_detail->time_stamp)),
(class_locale.members.locale),
format_class(class_locale.members.class),
evt_detail->description);
}
/**
* The following functions are defined for xscale
* (deviceid : 1064R, PERC5) controllers
@ -1659,8 +1725,56 @@ static struct megasas_instance *megasas_lookup_instance(u16 host_no)
return NULL;
}
/*
* megasas_set_dma_alignment - Set DMA alignment for PI enabled VD
*
* @sdev: OS provided scsi device
*
* Returns void
*/
static void megasas_set_dma_alignment(struct scsi_device *sdev)
{
u32 device_id, ld;
struct megasas_instance *instance;
struct fusion_context *fusion;
struct MR_LD_RAID *raid;
struct MR_DRV_RAID_MAP_ALL *local_map_ptr;
instance = megasas_lookup_instance(sdev->host->host_no);
fusion = instance->ctrl_context;
if (!fusion)
return;
if (sdev->channel >= MEGASAS_MAX_PD_CHANNELS) {
device_id = ((sdev->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL)
+ sdev->id;
local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)];
ld = MR_TargetIdToLdGet(device_id, local_map_ptr);
raid = MR_LdRaidGet(ld, local_map_ptr);
if (raid->capability.ldPiMode == MR_PROT_INFO_TYPE_CONTROLLER)
blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
}
}
static int megasas_slave_configure(struct scsi_device *sdev)
{
u16 pd_index = 0;
struct megasas_instance *instance;
instance = megasas_lookup_instance(sdev->host->host_no);
if (instance->allow_fw_scan) {
if (sdev->channel < MEGASAS_MAX_PD_CHANNELS &&
sdev->type == TYPE_DISK) {
pd_index = (sdev->channel * MEGASAS_MAX_DEV_PER_CHANNEL) +
sdev->id;
if (instance->pd_list[pd_index].driveState !=
MR_PD_STATE_SYSTEM)
return -ENXIO;
}
}
megasas_set_dma_alignment(sdev);
/*
* The RAID firmware may require extended timeouts.
*/
@ -1683,8 +1797,8 @@ static int megasas_slave_alloc(struct scsi_device *sdev)
pd_index =
(sdev->channel * MEGASAS_MAX_DEV_PER_CHANNEL) +
sdev->id;
if (instance->pd_list[pd_index].driveState ==
MR_PD_STATE_SYSTEM) {
if ((instance->allow_fw_scan || instance->pd_list[pd_index].driveState ==
MR_PD_STATE_SYSTEM)) {
return 0;
}
return -ENXIO;
@ -1736,10 +1850,7 @@ void megaraid_sas_kill_hba(struct megasas_instance *instance)
msleep(1000);
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0073SKINNY) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
(instance->ctrl_context)) {
writel(MFI_STOP_ADP, &instance->reg_set->doorbell);
/* Flush */
readl(&instance->reg_set->doorbell);
@ -2506,10 +2617,7 @@ static int megasas_reset_bus_host(struct scsi_cmnd *scmd)
/*
* First wait for all commands to complete
*/
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
if (instance->ctrl_context)
ret = megasas_reset_fusion(scmd->device->host, 1);
else
ret = megasas_generic_reset(scmd);
@ -2837,7 +2945,7 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
struct megasas_header *hdr = &cmd->frame->hdr;
unsigned long flags;
struct fusion_context *fusion = instance->ctrl_context;
u32 opcode;
u32 opcode, status;
/* flag for the retry reset */
cmd->retry_for_fw_reset = 0;
@ -2945,6 +3053,7 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
&& (cmd->frame->dcmd.mbox.b[1] == 1)) {
fusion->fast_path_io = 0;
spin_lock_irqsave(instance->host->host_lock, flags);
instance->map_update_cmd = NULL;
if (cmd->frame->hdr.cmd_status != 0) {
if (cmd->frame->hdr.cmd_status !=
MFI_STAT_NOT_FOUND)
@ -2982,6 +3091,27 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
spin_unlock_irqrestore(&poll_aen_lock, flags);
}
/* FW has an updated PD sequence */
if ((opcode == MR_DCMD_SYSTEM_PD_MAP_GET_INFO) &&
(cmd->frame->dcmd.mbox.b[0] == 1)) {
spin_lock_irqsave(instance->host->host_lock, flags);
status = cmd->frame->hdr.cmd_status;
instance->jbod_seq_cmd = NULL;
megasas_return_cmd(instance, cmd);
if (status == MFI_STAT_OK) {
instance->pd_seq_map_id++;
/* Re-register a pd sync seq num cmd */
if (megasas_sync_pd_seq_num(instance, true))
instance->use_seqnum_jbod_fp = false;
} else
instance->use_seqnum_jbod_fp = false;
spin_unlock_irqrestore(instance->host->host_lock, flags);
break;
}
/*
* See if got an event notification
*/
@ -3348,22 +3478,14 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
PCI_DEVICE_ID_LSI_SAS0073SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY)) {
(instance->ctrl_context))
writel(
MFI_INIT_CLEAR_HANDSHAKE|MFI_INIT_HOTPLUG,
&instance->reg_set->doorbell);
} else {
else
writel(
MFI_INIT_CLEAR_HANDSHAKE|MFI_INIT_HOTPLUG,
&instance->reg_set->inbound_doorbell);
}
max_wait = MEGASAS_RESET_WAIT_TIME;
cur_state = MFI_STATE_WAIT_HANDSHAKE;
@ -3374,17 +3496,10 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
PCI_DEVICE_ID_LSI_SAS0073SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY)) {
(instance->ctrl_context))
writel(MFI_INIT_HOTPLUG,
&instance->reg_set->doorbell);
} else
else
writel(MFI_INIT_HOTPLUG,
&instance->reg_set->inbound_doorbell);
@ -3401,24 +3516,11 @@ megasas_transition_to_ready(struct megasas_instance *instance, int ocr)
PCI_DEVICE_ID_LSI_SAS0073SKINNY) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_SAS0071SKINNY) ||
(instance->pdev->device
== PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device
== PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device
== PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device
== PCI_DEVICE_ID_LSI_FURY)) {
(instance->ctrl_context)) {
writel(MFI_RESET_FLAGS,
&instance->reg_set->doorbell);
if ((instance->pdev->device ==
PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY)) {
if (instance->ctrl_context) {
for (i = 0; i < (10 * 1000); i += 20) {
if (readl(
&instance->
@ -3639,11 +3741,7 @@ static int megasas_create_frame_pool(struct megasas_instance *instance)
memset(cmd->frame, 0, total_sz);
cmd->frame->io.context = cpu_to_le32(cmd->index);
cmd->frame->io.pad_0 = 0;
if ((instance->pdev->device != PCI_DEVICE_ID_LSI_FUSION) &&
(instance->pdev->device != PCI_DEVICE_ID_LSI_PLASMA) &&
(instance->pdev->device != PCI_DEVICE_ID_LSI_INVADER) &&
(instance->pdev->device != PCI_DEVICE_ID_LSI_FURY) &&
(reset_devices))
if (!instance->ctrl_context && reset_devices)
cmd->frame->hdr.cmd = MFI_CMD_INVALID;
}
@ -4136,11 +4234,21 @@ megasas_get_ctrl_info(struct megasas_instance *instance)
le32_to_cpus((u32 *)&ctrl_info->adapterOperations2);
le32_to_cpus((u32 *)&ctrl_info->adapterOperations3);
megasas_update_ext_vd_details(instance);
instance->use_seqnum_jbod_fp =
ctrl_info->adapterOperations3.useSeqNumJbodFP;
instance->is_imr = (ctrl_info->memory_size ? 0 : 1);
dev_info(&instance->pdev->dev,
"controller type\t: %s(%dMB)\n",
instance->is_imr ? "iMR" : "MR",
le16_to_cpu(ctrl_info->memory_size));
instance->disableOnlineCtrlReset =
ctrl_info->properties.OnOffProperties.disableOnlineCtrlReset;
dev_info(&instance->pdev->dev, "Online Controller Reset(OCR)\t: %s\n",
instance->disableOnlineCtrlReset ? "Disabled" : "Enabled");
instance->secure_jbod_support =
ctrl_info->adapterOperations3.supportSecurityonJBOD;
dev_info(&instance->pdev->dev, "Secure JBOD support\t: %s\n",
instance->secure_jbod_support ? "Yes" : "No");
}
pci_free_consistent(instance->pdev, sizeof(struct megasas_ctrl_info),
@ -4480,6 +4588,62 @@ megasas_destroy_irqs(struct megasas_instance *instance) {
free_irq(instance->pdev->irq, &instance->irq_context[0]);
}
/**
* megasas_setup_jbod_map - setup jbod map for FP seq_number.
* @instance: Adapter soft state
*
* Returns nothing; on failure the JBOD sequence-number fast path
* (use_seqnum_jbod_fp) is simply disabled.
*/
void
megasas_setup_jbod_map(struct megasas_instance *instance)
{
int i;
struct fusion_context *fusion = instance->ctrl_context;
u32 pd_seq_map_sz;
pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
(sizeof(struct MR_PD_CFG_SEQ) * (MAX_PHYSICAL_DEVICES - 1));
if (reset_devices || !fusion ||
!instance->ctrl_info->adapterOperations3.useSeqNumJbodFP) {
dev_info(&instance->pdev->dev,
"Jbod map is not supported %s %d\n",
__func__, __LINE__);
instance->use_seqnum_jbod_fp = false;
return;
}
if (fusion->pd_seq_sync[0])
goto skip_alloc;
for (i = 0; i < JBOD_MAPS_COUNT; i++) {
fusion->pd_seq_sync[i] = dma_alloc_coherent
(&instance->pdev->dev, pd_seq_map_sz,
&fusion->pd_seq_phys[i], GFP_KERNEL);
if (!fusion->pd_seq_sync[i]) {
dev_err(&instance->pdev->dev,
"Failed to allocate memory from %s %d\n",
__func__, __LINE__);
if (i == 1) {
dma_free_coherent(&instance->pdev->dev,
pd_seq_map_sz, fusion->pd_seq_sync[0],
fusion->pd_seq_phys[0]);
fusion->pd_seq_sync[0] = NULL;
}
instance->use_seqnum_jbod_fp = false;
return;
}
}
skip_alloc:
if (!megasas_sync_pd_seq_num(instance, false) &&
!megasas_sync_pd_seq_num(instance, true))
instance->use_seqnum_jbod_fp = true;
else
instance->use_seqnum_jbod_fp = false;
}
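megasas_setup_jbod_map() above allocates a pair of DMA-coherent sequence maps, unwinds the first buffer if the second allocation fails, and falls back to use_seqnum_jbod_fp = false rather than failing the probe. A compact sketch of that allocate-or-unwind pattern; the device, size and array names are hypothetical, not driver symbols.

#include <linux/dma-mapping.h>

/* Allocate a pair of DMA-coherent buffers; on partial failure, free what
 * was already obtained and let the caller disable the feature. */
static int alloc_seq_maps(struct device *dev, size_t sz,
			  void *cpu[2], dma_addr_t bus[2])
{
	int i;

	for (i = 0; i < 2; i++) {
		cpu[i] = dma_alloc_coherent(dev, sz, &bus[i], GFP_KERNEL);
		if (!cpu[i]) {
			while (--i >= 0)
				dma_free_coherent(dev, sz, cpu[i], bus[i]);
			return -ENOMEM;
		}
	}
	return 0;
}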
/**
* megasas_init_fw - Initializes the FW
* @instance: Adapter soft state
@ -4498,6 +4662,9 @@ static int megasas_init_fw(struct megasas_instance *instance)
unsigned long bar_list;
int i, loop, fw_msix_count = 0;
struct IOV_111 *iovPtr;
struct fusion_context *fusion;
fusion = instance->ctrl_context;
/* Find first memory bar */
bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM);
@ -4523,6 +4690,10 @@ static int megasas_init_fw(struct megasas_instance *instance)
case PCI_DEVICE_ID_LSI_PLASMA:
case PCI_DEVICE_ID_LSI_INVADER:
case PCI_DEVICE_ID_LSI_FURY:
case PCI_DEVICE_ID_LSI_INTRUDER:
case PCI_DEVICE_ID_LSI_INTRUDER_24:
case PCI_DEVICE_ID_LSI_CUTLASS_52:
case PCI_DEVICE_ID_LSI_CUTLASS_53:
instance->instancet = &megasas_instance_template_fusion;
break;
case PCI_DEVICE_ID_LSI_SAS1078R:
@ -4541,6 +4712,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
case PCI_DEVICE_ID_DELL_PERC5:
default:
instance->instancet = &megasas_instance_template_xscale;
instance->allow_fw_scan = 1;
break;
}
@ -4575,37 +4747,32 @@ static int megasas_init_fw(struct megasas_instance *instance)
scratch_pad_2 = readl
(&instance->reg_set->outbound_scratch_pad_2);
/* Check max MSI-X vectors */
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA)) {
instance->msix_vectors = (scratch_pad_2
& MR_MAX_REPLY_QUEUES_OFFSET) + 1;
fw_msix_count = instance->msix_vectors;
if (msix_vectors)
instance->msix_vectors =
min(msix_vectors,
instance->msix_vectors);
} else if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER)
|| (instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
/* Invader/Fury supports more than 8 MSI-X */
instance->msix_vectors = ((scratch_pad_2
& MR_MAX_REPLY_QUEUES_EXT_OFFSET)
>> MR_MAX_REPLY_QUEUES_EXT_OFFSET_SHIFT) + 1;
fw_msix_count = instance->msix_vectors;
/* Save 1-15 reply post index address to local memory
* Index 0 is already saved from reg offset
* MPI2_REPLY_POST_HOST_INDEX_OFFSET
*/
for (loop = 1; loop < MR_MAX_MSIX_REG_ARRAY; loop++) {
instance->reply_post_host_index_addr[loop] =
(u32 __iomem *)
((u8 __iomem *)instance->reg_set +
MPI2_SUP_REPLY_POST_HOST_INDEX_OFFSET
+ (loop * 0x10));
if (fusion) {
if (fusion->adapter_type == THUNDERBOLT_SERIES) { /* Thunderbolt Series*/
instance->msix_vectors = (scratch_pad_2
& MR_MAX_REPLY_QUEUES_OFFSET) + 1;
fw_msix_count = instance->msix_vectors;
} else { /* Invader series supports more than 8 MSI-x vectors*/
instance->msix_vectors = ((scratch_pad_2
& MR_MAX_REPLY_QUEUES_EXT_OFFSET)
>> MR_MAX_REPLY_QUEUES_EXT_OFFSET_SHIFT) + 1;
fw_msix_count = instance->msix_vectors;
/* Save 1-15 reply post index address to local memory
* Index 0 is already saved from reg offset
* MPI2_REPLY_POST_HOST_INDEX_OFFSET
*/
for (loop = 1; loop < MR_MAX_MSIX_REG_ARRAY; loop++) {
instance->reply_post_host_index_addr[loop] =
(u32 __iomem *)
((u8 __iomem *)instance->reg_set +
MPI2_SUP_REPLY_POST_HOST_INDEX_OFFSET
+ (loop * 0x10));
}
}
if (msix_vectors)
instance->msix_vectors = min(msix_vectors,
instance->msix_vectors);
} else
} else /* MFI adapters */
instance->msix_vectors = 1;
/* Don't bother allocating more MSI-X vectors than cpus */
instance->msix_vectors = min(instance->msix_vectors,
@ -4626,6 +4793,9 @@ static int megasas_init_fw(struct megasas_instance *instance)
"current msix/online cpus\t: (%d/%d)\n",
instance->msix_vectors, (unsigned int)num_online_cpus());
tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
(unsigned long)instance);
if (instance->msix_vectors ?
megasas_setup_irqs_msix(instance, 1) :
megasas_setup_irqs_ioapic(instance))
@ -4646,13 +4816,13 @@ static int megasas_init_fw(struct megasas_instance *instance)
if (instance->instancet->init_adapter(instance))
goto fail_init_adapter;
tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
(unsigned long)instance);
instance->instancet->enable_intr(instance);
dev_err(&instance->pdev->dev, "INIT adapter done\n");
megasas_setup_jbod_map(instance);
/** for passthrough
* the following function will get the PD LIST.
*/
@ -4686,8 +4856,6 @@ static int megasas_init_fw(struct megasas_instance *instance)
tmp_sectors = min_t(u32, max_sectors_1, max_sectors_2);
instance->disableOnlineCtrlReset =
ctrl_info->properties.OnOffProperties.disableOnlineCtrlReset;
instance->mpio = ctrl_info->adapterOperations2.mpio;
instance->UnevenSpanSupport =
ctrl_info->adapterOperations2.supportUnevenSpans;
@ -4700,18 +4868,22 @@ static int megasas_init_fw(struct megasas_instance *instance)
}
if (ctrl_info->host_interface.SRIOV) {
if (!ctrl_info->adapterOperations2.activePassive)
instance->PlasmaFW111 = 1;
instance->requestorId = ctrl_info->iov.requestorId;
if (instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) {
if (!ctrl_info->adapterOperations2.activePassive)
instance->PlasmaFW111 = 1;
if (!instance->PlasmaFW111)
instance->requestorId =
ctrl_info->iov.requestorId;
else {
iovPtr = (struct IOV_111 *)((unsigned char *)ctrl_info + IOV_111_OFFSET);
instance->requestorId = iovPtr->requestorId;
dev_info(&instance->pdev->dev, "SR-IOV: firmware type: %s\n",
instance->PlasmaFW111 ? "1.11" : "new");
if (instance->PlasmaFW111) {
iovPtr = (struct IOV_111 *)
((unsigned char *)ctrl_info + IOV_111_OFFSET);
instance->requestorId = iovPtr->requestorId;
}
}
dev_warn(&instance->pdev->dev, "I am VF "
"requestorId %d\n", instance->requestorId);
dev_info(&instance->pdev->dev, "SRIOV: VF requestorId %d\n",
instance->requestorId);
}
instance->crash_dump_fw_support =
@ -4732,8 +4904,6 @@ static int megasas_init_fw(struct megasas_instance *instance)
instance->crash_dump_buf = NULL;
}
instance->secure_jbod_support =
ctrl_info->adapterOperations3.supportSecurityonJBOD;
dev_info(&instance->pdev->dev,
"pci id\t\t: (0x%04x)/(0x%04x)/(0x%04x)/(0x%04x)\n",
@ -4743,16 +4913,14 @@ static int megasas_init_fw(struct megasas_instance *instance)
le16_to_cpu(ctrl_info->pci.sub_device_id));
dev_info(&instance->pdev->dev, "unevenspan support : %s\n",
instance->UnevenSpanSupport ? "yes" : "no");
dev_info(&instance->pdev->dev, "disable ocr : %s\n",
instance->disableOnlineCtrlReset ? "yes" : "no");
dev_info(&instance->pdev->dev, "firmware crash dump : %s\n",
instance->crash_dump_drv_support ? "yes" : "no");
dev_info(&instance->pdev->dev, "secure jbod : %s\n",
instance->secure_jbod_support ? "yes" : "no");
dev_info(&instance->pdev->dev, "jbod sync map : %s\n",
instance->use_seqnum_jbod_fp ? "yes" : "no");
instance->max_sectors_per_req = instance->max_num_sge *
PAGE_SIZE / 512;
SGE_BUFFER_SIZE / 512;
if (tmp_sectors && (instance->max_sectors_per_req > tmp_sectors))
instance->max_sectors_per_req = tmp_sectors;
@ -5049,7 +5217,6 @@ static int megasas_start_aen(struct megasas_instance *instance)
static int megasas_io_attach(struct megasas_instance *instance)
{
struct Scsi_Host *host = instance->host;
u32 error;
/*
* Export parameters required by SCSI mid-layer
@ -5092,20 +5259,10 @@ static int megasas_io_attach(struct megasas_instance *instance)
host->max_cmd_len = 16;
/* Fusion only supports host reset */
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (instance->ctrl_context) {
host->hostt->eh_device_reset_handler = NULL;
host->hostt->eh_bus_reset_handler = NULL;
}
error = scsi_init_shared_tag_map(host, host->can_queue);
if (error) {
dev_err(&instance->pdev->dev,
"Failed to shared tag from %s %d\n",
__func__, __LINE__);
return -ENODEV;
}
/*
* Notify the mid-layer about the new controller
@ -5218,6 +5375,10 @@ static int megasas_probe_one(struct pci_dev *pdev,
case PCI_DEVICE_ID_LSI_PLASMA:
case PCI_DEVICE_ID_LSI_INVADER:
case PCI_DEVICE_ID_LSI_FURY:
case PCI_DEVICE_ID_LSI_INTRUDER:
case PCI_DEVICE_ID_LSI_INTRUDER_24:
case PCI_DEVICE_ID_LSI_CUTLASS_52:
case PCI_DEVICE_ID_LSI_CUTLASS_53:
{
instance->ctrl_context_pages =
get_order(sizeof(struct fusion_context));
@ -5231,6 +5392,11 @@ static int megasas_probe_one(struct pci_dev *pdev,
fusion = instance->ctrl_context;
memset(fusion, 0,
((1 << PAGE_SHIFT) << instance->ctrl_context_pages));
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA))
fusion->adapter_type = THUNDERBOLT_SERIES;
else
fusion->adapter_type = INVADER_SERIES;
}
break;
default: /* For all other supported controllers */
@ -5333,10 +5499,7 @@ static int megasas_probe_one(struct pci_dev *pdev,
instance->disableOnlineCtrlReset = 1;
instance->UnevenSpanSupport = 0;
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (instance->ctrl_context) {
INIT_WORK(&instance->work_init, megasas_fusion_ocr_wq);
INIT_WORK(&instance->crash_init, megasas_fusion_crash_dump_wq);
} else
@ -5416,10 +5579,7 @@ static int megasas_probe_one(struct pci_dev *pdev,
instance->instancet->disable_intr(instance);
megasas_destroy_irqs(instance);
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_FUSION) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
if (instance->ctrl_context)
megasas_release_fusion(instance);
else
megasas_release_mfi(instance);
@ -5506,10 +5666,14 @@ static void megasas_shutdown_controller(struct megasas_instance *instance,
if (instance->aen_cmd)
megasas_issue_blocked_abort_cmd(instance,
instance->aen_cmd, 30);
instance->aen_cmd, MEGASAS_BLOCKED_CMD_TIMEOUT);
if (instance->map_update_cmd)
megasas_issue_blocked_abort_cmd(instance,
instance->map_update_cmd, 30);
instance->map_update_cmd, MEGASAS_BLOCKED_CMD_TIMEOUT);
if (instance->jbod_seq_cmd)
megasas_issue_blocked_abort_cmd(instance,
instance->jbod_seq_cmd, MEGASAS_BLOCKED_CMD_TIMEOUT);
dcmd = &cmd->frame->dcmd;
memset(dcmd->mbox.b, 0, MFI_MBOX_SIZE);
@ -5628,12 +5792,7 @@ megasas_resume(struct pci_dev *pdev)
instance->msix_vectors))
goto fail_reenable_msix;
switch (instance->pdev->device) {
case PCI_DEVICE_ID_LSI_FUSION:
case PCI_DEVICE_ID_LSI_PLASMA:
case PCI_DEVICE_ID_LSI_INVADER:
case PCI_DEVICE_ID_LSI_FURY:
{
if (instance->ctrl_context) {
megasas_reset_reply_desc(instance);
if (megasas_ioc_init_fusion(instance)) {
megasas_free_cmds(instance);
@ -5642,14 +5801,11 @@ megasas_resume(struct pci_dev *pdev)
}
if (!megasas_get_map_info(instance))
megasas_sync_map_info(instance);
}
break;
default:
} else {
*instance->producer = 0;
*instance->consumer = 0;
if (megasas_issue_init_mfi(instance))
goto fail_init_mfi;
break;
}
tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
@ -5674,6 +5830,7 @@ megasas_resume(struct pci_dev *pdev)
}
instance->instancet->enable_intr(instance);
megasas_setup_jbod_map(instance);
instance->unload = 0;
/*
@ -5721,6 +5878,7 @@ static void megasas_detach_one(struct pci_dev *pdev)
struct Scsi_Host *host;
struct megasas_instance *instance;
struct fusion_context *fusion;
u32 pd_seq_map_sz;
instance = pci_get_drvdata(pdev);
instance->unload = 1;
@ -5769,12 +5927,11 @@ static void megasas_detach_one(struct pci_dev *pdev)
if (instance->msix_vectors)
pci_disable_msix(instance->pdev);
switch (instance->pdev->device) {
case PCI_DEVICE_ID_LSI_FUSION:
case PCI_DEVICE_ID_LSI_PLASMA:
case PCI_DEVICE_ID_LSI_INVADER:
case PCI_DEVICE_ID_LSI_FURY:
if (instance->ctrl_context) {
megasas_release_fusion(instance);
pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
(sizeof(struct MR_PD_CFG_SEQ) *
(MAX_PHYSICAL_DEVICES - 1));
for (i = 0; i < 2 ; i++) {
if (fusion->ld_map[i])
dma_free_coherent(&instance->pdev->dev,
@ -5784,11 +5941,15 @@ static void megasas_detach_one(struct pci_dev *pdev)
if (fusion->ld_drv_map[i])
free_pages((ulong)fusion->ld_drv_map[i],
fusion->drv_map_pages);
if (fusion->pd_seq_sync)
dma_free_coherent(&instance->pdev->dev,
pd_seq_map_sz,
fusion->pd_seq_sync[i],
fusion->pd_seq_phys[i]);
}
free_pages((ulong)instance->ctrl_context,
instance->ctrl_context_pages);
break;
default:
} else {
megasas_release_mfi(instance);
pci_free_consistent(pdev, sizeof(u32),
instance->producer,
@ -5796,7 +5957,6 @@ static void megasas_detach_one(struct pci_dev *pdev)
pci_free_consistent(pdev, sizeof(u32),
instance->consumer,
instance->consumer_h);
break;
}
kfree(instance->ctrl_info);
@ -6316,6 +6476,9 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
int i;
int error = 0;
compat_uptr_t ptr;
unsigned long local_raw_ptr;
u32 local_sense_off;
u32 local_sense_len;
if (clear_user(ioc, sizeof(*ioc)))
return -EFAULT;
@ -6333,9 +6496,15 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
* sense_len is not null, so prepare the 64bit value under
* the same condition.
*/
if (ioc->sense_len) {
if (get_user(local_raw_ptr, ioc->frame.raw) ||
get_user(local_sense_off, &ioc->sense_off) ||
get_user(local_sense_len, &ioc->sense_len))
return -EFAULT;
if (local_sense_len) {
void __user **sense_ioc_ptr =
(void __user **)(ioc->frame.raw + ioc->sense_off);
(void __user **)((u8*)local_raw_ptr + local_sense_off);
compat_uptr_t *sense_cioc_ptr =
(compat_uptr_t *)(cioc->frame.raw + cioc->sense_off);
if (get_user(ptr, sense_cioc_ptr) ||
@ -6504,6 +6673,7 @@ megasas_aen_polling(struct work_struct *work)
instance->ev = NULL;
host = instance->host;
if (instance->evt_detail) {
megasas_decode_evt(instance);
switch (le32_to_cpu(instance->evt_detail->code)) {
case MR_EVT_PD_INSERTED:
@ -6564,8 +6734,7 @@ megasas_aen_polling(struct work_struct *work)
case MR_EVT_CFG_CLEARED:
case MR_EVT_LD_DELETED:
if (!instance->requestorId ||
(instance->requestorId &&
megasas_get_ld_vf_affiliation(instance, 0))) {
megasas_get_ld_vf_affiliation(instance, 0)) {
if (megasas_ld_list_query(instance,
MR_LD_QUERY_TYPE_EXPOSED_TO_HOST))
megasas_get_ld_list(instance);
@ -6596,8 +6765,7 @@ megasas_aen_polling(struct work_struct *work)
break;
case MR_EVT_LD_CREATED:
if (!instance->requestorId ||
(instance->requestorId &&
megasas_get_ld_vf_affiliation(instance, 0))) {
megasas_get_ld_vf_affiliation(instance, 0)) {
if (megasas_ld_list_query(instance,
MR_LD_QUERY_TYPE_EXPOSED_TO_HOST))
megasas_get_ld_list(instance);
@ -6627,6 +6795,9 @@ megasas_aen_polling(struct work_struct *work)
case MR_EVT_LD_STATE_CHANGE:
doscan = 1;
break;
case MR_EVT_CTRL_PROP_CHANGED:
megasas_get_ctrl_info(instance);
break;
default:
doscan = 0;
break;
@ -6663,8 +6834,7 @@ megasas_aen_polling(struct work_struct *work)
}
if (!instance->requestorId ||
(instance->requestorId &&
megasas_get_ld_vf_affiliation(instance, 0))) {
megasas_get_ld_vf_affiliation(instance, 0)) {
if (megasas_ld_list_query(instance,
MR_LD_QUERY_TYPE_EXPOSED_TO_HOST))
megasas_get_ld_list(instance);


@ -741,14 +741,12 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
u8 physArm, span;
u64 row;
u8 retval = TRUE;
u8 do_invader = 0;
u64 *pdBlock = &io_info->pdBlock;
__le16 *pDevHandle = &io_info->devHandle;
u32 logArm, rowMod, armQ, arm;
struct fusion_context *fusion;
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER ||
instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
do_invader = 1;
fusion = instance->ctrl_context;
/*Get row and span from io_info for Uneven Span IO.*/
row = io_info->start_row;
@ -779,7 +777,8 @@ static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
else {
*pDevHandle = cpu_to_le16(MR_PD_INVALID);
if ((raid->level >= 5) &&
(!do_invader || (do_invader &&
((fusion->adapter_type == THUNDERBOLT_SERIES) ||
((fusion->adapter_type == INVADER_SERIES) &&
(raid->regTypeReqOnRead != REGION_TYPE_UNUSED))))
pRAID_Context->regLockFlags = REGION_TYPE_EXCLUSIVE;
else if (raid->level == 1) {
@ -823,13 +822,12 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
u8 physArm, span;
u64 row;
u8 retval = TRUE;
u8 do_invader = 0;
u64 *pdBlock = &io_info->pdBlock;
__le16 *pDevHandle = &io_info->devHandle;
struct fusion_context *fusion;
fusion = instance->ctrl_context;
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER ||
instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
do_invader = 1;
row = mega_div64_32(stripRow, raid->rowDataSize);
@ -875,7 +873,8 @@ u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
/* set dev handle as invalid. */
*pDevHandle = cpu_to_le16(MR_PD_INVALID);
if ((raid->level >= 5) &&
(!do_invader || (do_invader &&
((fusion->adapter_type == THUNDERBOLT_SERIES) ||
((fusion->adapter_type == INVADER_SERIES) &&
(raid->regTypeReqOnRead != REGION_TYPE_UNUSED))))
pRAID_Context->regLockFlags = REGION_TYPE_EXCLUSIVE;
else if (raid->level == 1) {
@ -909,6 +908,7 @@ MR_BuildRaidContext(struct megasas_instance *instance,
struct RAID_CONTEXT *pRAID_Context,
struct MR_DRV_RAID_MAP_ALL *map, u8 **raidLUN)
{
struct fusion_context *fusion;
struct MR_LD_RAID *raid;
u32 ld, stripSize, stripe_mask;
u64 endLba, endStrip, endRow, start_row, start_strip;
@ -929,6 +929,7 @@ MR_BuildRaidContext(struct megasas_instance *instance,
isRead = io_info->isRead;
io_info->IoforUnevenSpan = 0;
io_info->start_span = SPAN_INVALID;
fusion = instance->ctrl_context;
ld = MR_TargetIdToLdGet(ldTgtId, map);
raid = MR_LdRaidGet(ld, map);
@ -1092,8 +1093,7 @@ MR_BuildRaidContext(struct megasas_instance *instance,
cpu_to_le16(raid->fpIoTimeoutForLd ?
raid->fpIoTimeoutForLd :
map->raidMap.fpPdIoTimeoutSec);
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
if (fusion->adapter_type == INVADER_SERIES)
pRAID_Context->regLockFlags = (isRead) ?
raid->regTypeReqOnRead : raid->regTypeReqOnWrite;
else
@ -1198,10 +1198,6 @@ void mr_update_span_set(struct MR_DRV_RAID_MAP_ALL *map,
span_row_width +=
MR_LdSpanPtrGet
(ld, count, map)->spanRowDataSize;
printk(KERN_INFO "megasas:"
"span %x rowDataSize %x\n",
count, MR_LdSpanPtrGet
(ld, count, map)->spanRowDataSize);
}
}

View File

@ -316,26 +316,23 @@ static int megasas_create_frame_pool_fusion(struct megasas_instance *instance)
u32 max_cmd;
struct fusion_context *fusion;
struct megasas_cmd_fusion *cmd;
u32 total_sz_chain_frame;
fusion = instance->ctrl_context;
max_cmd = instance->max_fw_cmds;
total_sz_chain_frame = MEGASAS_MAX_SZ_CHAIN_FRAME;
/*
* Use DMA pool facility provided by PCI layer
*/
fusion->sg_dma_pool = pci_pool_create("megasas sg pool fusion",
instance->pdev,
total_sz_chain_frame, 4,
0);
fusion->sg_dma_pool = pci_pool_create("sg_pool_fusion", instance->pdev,
instance->max_chain_frame_sz,
4, 0);
if (!fusion->sg_dma_pool) {
dev_printk(KERN_DEBUG, &instance->pdev->dev, "failed to setup request pool fusion\n");
return -ENOMEM;
}
fusion->sense_dma_pool = pci_pool_create("megasas sense pool fusion",
fusion->sense_dma_pool = pci_pool_create("sense pool fusion",
instance->pdev,
SCSI_SENSE_BUFFERSIZE, 64, 0);
@ -605,6 +602,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
int i;
struct megasas_header *frame_hdr;
const char *sys_info;
MFI_CAPABILITIES *drv_ops;
fusion = instance->ctrl_context;
@ -652,20 +650,21 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
init_frame->cmd = MFI_CMD_INIT;
init_frame->cmd_status = 0xFF;
drv_ops = (MFI_CAPABILITIES *) &(init_frame->driver_operations);
/* driver supports extended MSIX */
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
init_frame->driver_operations.
mfi_capabilities.support_additional_msix = 1;
if (fusion->adapter_type == INVADER_SERIES)
drv_ops->mfi_capabilities.support_additional_msix = 1;
/* driver supports HA / Remote LUN over Fast Path interface */
init_frame->driver_operations.mfi_capabilities.support_fp_remote_lun
= 1;
init_frame->driver_operations.mfi_capabilities.support_max_255lds
= 1;
init_frame->driver_operations.mfi_capabilities.support_ndrive_r1_lb
= 1;
init_frame->driver_operations.mfi_capabilities.security_protocol_cmds_fw
= 1;
drv_ops->mfi_capabilities.support_fp_remote_lun = 1;
drv_ops->mfi_capabilities.support_max_255lds = 1;
drv_ops->mfi_capabilities.support_ndrive_r1_lb = 1;
drv_ops->mfi_capabilities.security_protocol_cmds_fw = 1;
if (instance->max_chain_frame_sz > MEGASAS_CHAIN_FRAME_SZ_MIN)
drv_ops->mfi_capabilities.support_ext_io_size = 1;
/* Convert capability to LE32 */
cpu_to_le32s((u32 *)&init_frame->driver_operations.mfi_capabilities);
@ -726,6 +725,83 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
return ret;
}
/**
* megasas_sync_pd_seq_num - JBOD SEQ MAP
* @instance: Adapter soft state
* @pend: true to issue the JBOD map DCMD as a pended request
*
* Issue the JBOD map DCMD to the firmware. For a pended request, issue the
* command and return immediately; for the initial (non-pended) fetch, issue
* the command and wait for its completion.
*/
int
megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend) {
int ret = 0;
u32 pd_seq_map_sz;
struct megasas_cmd *cmd;
struct megasas_dcmd_frame *dcmd;
struct fusion_context *fusion = instance->ctrl_context;
struct MR_PD_CFG_SEQ_NUM_SYNC *pd_sync;
dma_addr_t pd_seq_h;
pd_sync = (void *)fusion->pd_seq_sync[(instance->pd_seq_map_id & 1)];
pd_seq_h = fusion->pd_seq_phys[(instance->pd_seq_map_id & 1)];
pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
(sizeof(struct MR_PD_CFG_SEQ) *
(MAX_PHYSICAL_DEVICES - 1));
cmd = megasas_get_cmd(instance);
if (!cmd) {
dev_err(&instance->pdev->dev,
"Could not get mfi cmd. Fail from %s %d\n",
__func__, __LINE__);
return -ENOMEM;
}
dcmd = &cmd->frame->dcmd;
memset(pd_sync, 0, pd_seq_map_sz);
memset(dcmd->mbox.b, 0, MFI_MBOX_SIZE);
dcmd->cmd = MFI_CMD_DCMD;
dcmd->cmd_status = 0xFF;
dcmd->sge_count = 1;
dcmd->timeout = 0;
dcmd->pad_0 = 0;
dcmd->data_xfer_len = cpu_to_le32(pd_seq_map_sz);
dcmd->opcode = cpu_to_le32(MR_DCMD_SYSTEM_PD_MAP_GET_INFO);
dcmd->sgl.sge32[0].phys_addr = cpu_to_le32(pd_seq_h);
dcmd->sgl.sge32[0].length = cpu_to_le32(pd_seq_map_sz);
if (pend) {
dcmd->mbox.b[0] = MEGASAS_DCMD_MBOX_PEND_FLAG;
dcmd->flags = cpu_to_le16(MFI_FRAME_DIR_WRITE);
instance->jbod_seq_cmd = cmd;
instance->instancet->issue_dcmd(instance, cmd);
return 0;
}
dcmd->flags = cpu_to_le16(MFI_FRAME_DIR_READ);
/* The code below runs only for a non-pended DCMD */
if (instance->ctrl_context && !instance->mask_interrupts)
ret = megasas_issue_blocked_cmd(instance, cmd, 60);
else
ret = megasas_issue_polled(instance, cmd);
if (le32_to_cpu(pd_sync->count) > MAX_PHYSICAL_DEVICES) {
dev_warn(&instance->pdev->dev,
"driver supports max %d JBOD, but FW reports %d\n",
MAX_PHYSICAL_DEVICES, le32_to_cpu(pd_sync->count));
ret = -EINVAL;
}
if (!ret)
instance->pd_seq_map_id++;
megasas_return_cmd(instance, cmd);
return ret;
}
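/*
 * The JBOD sequence map is double-buffered: each call fills
 * pd_seq_sync[pd_seq_map_id & 1] and, on success, bumps pd_seq_map_id, so
 * the I/O path keeps reading the last completed copy at index
 * (pd_seq_map_id - 1) & 1 (see megasas_build_syspd_fusion below).
 * Illustrative sequence only (hypothetical caller, not from this patch):
 *
 *	megasas_sync_pd_seq_num(instance, false); // fills buffer 0, map_id -> 1
 *	megasas_sync_pd_seq_num(instance, true);  // pends the next update in buffer 1
 */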
/*
* megasas_get_ld_map_info - Returns FW's ld_map structure
* @instance: Adapter soft state
@ -961,6 +1037,18 @@ megasas_display_intel_branding(struct megasas_instance *instance)
break;
}
break;
case PCI_DEVICE_ID_LSI_CUTLASS_52:
case PCI_DEVICE_ID_LSI_CUTLASS_53:
switch (instance->pdev->subsystem_device) {
case MEGARAID_INTEL_RMS3BC160_SSDID:
dev_info(&instance->pdev->dev, "scsi host %d: %s\n",
instance->host->host_no,
MEGARAID_INTEL_RMS3BC160_BRANDING);
break;
default:
break;
}
break;
default:
break;
}
@ -977,7 +1065,7 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
{
struct megasas_register_set __iomem *reg_set;
struct fusion_context *fusion;
u32 max_cmd;
u32 max_cmd, scratch_pad_2;
int i = 0, count;
fusion = instance->ctrl_context;
@ -1016,15 +1104,40 @@ megasas_init_adapter_fusion(struct megasas_instance *instance)
(MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE *
(max_cmd + 1)); /* Extra 1 for SMID 0 */
scratch_pad_2 = readl(&instance->reg_set->outbound_scratch_pad_2);
/* If scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK is set,
* the firmware supports an extended IO chain frame four times larger than
* that of legacy firmware.
* Legacy Firmware - Frame size is (8 * 128) = 1K
* 1M IO Firmware - Frame size is (8 * 128 * 4) = 4K
*/
if (scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK)
instance->max_chain_frame_sz =
((scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_MASK) >>
MEGASAS_MAX_CHAIN_SHIFT) * MEGASAS_1MB_IO;
else
instance->max_chain_frame_sz =
((scratch_pad_2 & MEGASAS_MAX_CHAIN_SIZE_MASK) >>
MEGASAS_MAX_CHAIN_SHIFT) * MEGASAS_256K_IO;
if (instance->max_chain_frame_sz < MEGASAS_CHAIN_FRAME_SZ_MIN) {
dev_warn(&instance->pdev->dev, "frame size %d invalid, fall back to legacy max frame size %d\n",
instance->max_chain_frame_sz,
MEGASAS_CHAIN_FRAME_SZ_MIN);
instance->max_chain_frame_sz = MEGASAS_CHAIN_FRAME_SZ_MIN;
}
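/*
 * Worked example of the decode above (hypothetical register value): a
 * chain-size field of 8 in bits 9:5 of scratch_pad_2 gives
 *   8 * MEGASAS_256K_IO (128) = 1024 bytes on legacy firmware, and
 *   8 * MEGASAS_1MB_IO  (512) = 4096 bytes when the
 * MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK bit is also set, matching the 1K/4K
 * figures in the comment above.
 */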
fusion->max_sge_in_main_msg =
(MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE -
offsetof(struct MPI2_RAID_SCSI_IO_REQUEST, SGL))/16;
(MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE
- offsetof(struct MPI2_RAID_SCSI_IO_REQUEST, SGL))/16;
fusion->max_sge_in_chain =
MEGASAS_MAX_SZ_CHAIN_FRAME / sizeof(union MPI2_SGE_IO_UNION);
instance->max_chain_frame_sz
/ sizeof(union MPI2_SGE_IO_UNION);
instance->max_num_sge = rounddown_pow_of_two(
fusion->max_sge_in_main_msg + fusion->max_sge_in_chain - 2);
instance->max_num_sge =
rounddown_pow_of_two(fusion->max_sge_in_main_msg
+ fusion->max_sge_in_chain - 2);
/* Used for pass thru MFI frame (DCMD) */
fusion->chain_offset_mfi_pthru =
@ -1186,8 +1299,7 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
fusion = instance->ctrl_context;
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (fusion->adapter_type == INVADER_SERIES) {
struct MPI25_IEEE_SGE_CHAIN64 *sgl_ptr_end = sgl_ptr;
sgl_ptr_end += fusion->max_sge_in_main_msg - 1;
sgl_ptr_end->Flags = 0;
@ -1204,11 +1316,9 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
sgl_ptr->Length = cpu_to_le32(sg_dma_len(os_sgl));
sgl_ptr->Address = cpu_to_le64(sg_dma_address(os_sgl));
sgl_ptr->Flags = 0;
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (fusion->adapter_type == INVADER_SERIES)
if (i == sge_count - 1)
sgl_ptr->Flags = IEEE_SGE_FLAGS_END_OF_LIST;
}
sgl_ptr++;
sg_processed = i + 1;
@ -1217,10 +1327,7 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
(sge_count > fusion->max_sge_in_main_msg)) {
struct MPI25_IEEE_SGE_CHAIN64 *sg_chain;
if ((instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY)) {
if (fusion->adapter_type == INVADER_SERIES) {
if ((le16_to_cpu(cmd->io_request->IoFlags) &
MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH) !=
MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH)
@ -1236,10 +1343,7 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
sg_chain = sgl_ptr;
/* Prepare chain element */
sg_chain->NextChainOffset = 0;
if ((instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY))
if (fusion->adapter_type == INVADER_SERIES)
sg_chain->Flags = IEEE_SGE_FLAGS_CHAIN_ELEMENT;
else
sg_chain->Flags =
@ -1250,7 +1354,7 @@ megasas_make_sgl_fusion(struct megasas_instance *instance,
sgl_ptr =
(struct MPI25_IEEE_SGE_CHAIN64 *)cmd->sg_frame;
memset(sgl_ptr, 0, MEGASAS_MAX_SZ_CHAIN_FRAME);
memset(sgl_ptr, 0, instance->max_chain_frame_sz);
}
}
@ -1556,8 +1660,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
cmd->request_desc->SCSIIO.RequestFlags =
(MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY
<< MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (fusion->adapter_type == INVADER_SERIES) {
if (io_request->RaidContext.regLockFlags ==
REGION_TYPE_UNUSED)
cmd->request_desc->SCSIIO.RequestFlags =
@ -1582,7 +1685,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
scp->SCp.Status &= ~MEGASAS_LOAD_BALANCE_FLAG;
if ((raidLUN[0] == 1) &&
(local_map_ptr->raidMap.devHndlInfo[io_info.pd_after_lb].validHandles > 2)) {
(local_map_ptr->raidMap.devHndlInfo[io_info.pd_after_lb].validHandles > 1)) {
instance->dev_handle = !(instance->dev_handle);
io_info.devHandle =
local_map_ptr->raidMap.devHndlInfo[io_info.pd_after_lb].devHandle[instance->dev_handle];
@ -1598,8 +1701,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
cmd->request_desc->SCSIIO.RequestFlags =
(MEGASAS_REQ_DESCRIPT_FLAGS_LD_IO
<< MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (fusion->adapter_type == INVADER_SERIES) {
if (io_request->RaidContext.regLockFlags ==
REGION_TYPE_UNUSED)
cmd->request_desc->SCSIIO.RequestFlags =
@ -1722,7 +1824,9 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
u16 timeout_limit;
struct MR_DRV_RAID_MAP_ALL *local_map_ptr;
struct RAID_CONTEXT *pRAID_Context;
struct MR_PD_CFG_SEQ_NUM_SYNC *pd_sync;
struct fusion_context *fusion = instance->ctrl_context;
pd_sync = (void *)fusion->pd_seq_sync[(instance->pd_seq_map_id - 1) & 1];
device_id = MEGASAS_DEV_INDEX(scmd);
pd_index = MEGASAS_PD_INDEX(scmd);
@ -1731,16 +1835,38 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
io_request = cmd->io_request;
/* get RAID_Context pointer */
pRAID_Context = &io_request->RaidContext;
pRAID_Context->regLockFlags = 0;
pRAID_Context->regLockRowLBA = 0;
pRAID_Context->regLockLength = 0;
io_request->DataLength = cpu_to_le32(scsi_bufflen(scmd));
io_request->LUN[1] = scmd->device->lun;
pRAID_Context->RAIDFlags = MR_RAID_FLAGS_IO_SUB_TYPE_SYSTEM_PD
<< MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
pRAID_Context->VirtualDiskTgtId = cpu_to_le16(device_id);
pRAID_Context->configSeqNum = 0;
local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)];
io_request->DevHandle =
local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl;
/* If FW supports PD sequence number */
if (instance->use_seqnum_jbod_fp &&
instance->pd_list[pd_index].driveType == TYPE_DISK) {
/* TgtId must be incremented by 255 because the JBOD sequence number is an
* index below the RAID map
*/
pRAID_Context->VirtualDiskTgtId =
cpu_to_le16(device_id + (MAX_PHYSICAL_DEVICES - 1));
pRAID_Context->configSeqNum = pd_sync->seq[pd_index].seqNum;
io_request->DevHandle = pd_sync->seq[pd_index].devHandle;
pRAID_Context->regLockFlags |=
(MR_RL_FLAGS_SEQ_NUM_ENABLE|MR_RL_FLAGS_GRANT_DESTINATION_CUDA);
} else if (fusion->fast_path_io) {
pRAID_Context->VirtualDiskTgtId = cpu_to_le16(device_id);
pRAID_Context->configSeqNum = 0;
local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)];
io_request->DevHandle =
local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl;
} else {
/* Want to send all IO via FW path */
pRAID_Context->VirtualDiskTgtId = cpu_to_le16(device_id);
pRAID_Context->configSeqNum = 0;
io_request->DevHandle = cpu_to_le16(0xFFFF);
}
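/*
 * Example of the target ID offset in the JBOD fast path above (hypothetical
 * values): with MAX_PHYSICAL_DEVICES = 256, a system PD at device_id 3 gets
 * VirtualDiskTgtId = 3 + (256 - 1) = 258.
 */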
cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
cmd->request_desc->SCSIIO.MSIxIndex =
@ -1755,22 +1881,16 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
(MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO <<
MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
pRAID_Context->timeoutValue = cpu_to_le16(os_timeout_value);
pRAID_Context->VirtualDiskTgtId = cpu_to_le16(device_id);
} else {
/* system pd Fast Path */
io_request->Function = MPI2_FUNCTION_SCSI_IO_REQUEST;
pRAID_Context->regLockFlags = 0;
pRAID_Context->regLockRowLBA = 0;
pRAID_Context->regLockLength = 0;
timeout_limit = (scmd->device->type == TYPE_DISK) ?
255 : 0xFFFF;
pRAID_Context->timeoutValue =
cpu_to_le16((os_timeout_value > timeout_limit) ?
timeout_limit : os_timeout_value);
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
cmd->request_desc->SCSIIO.RequestFlags |=
(MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK <<
MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT);
if (fusion->adapter_type == INVADER_SERIES) {
pRAID_Context->Type = MPI2_TYPE_CUDA;
pRAID_Context->nseg = 0x1;
io_request->IoFlags |=
@ -1796,7 +1916,7 @@ megasas_build_io_fusion(struct megasas_instance *instance,
struct scsi_cmnd *scp,
struct megasas_cmd_fusion *cmd)
{
u32 sge_count;
u16 sge_count;
u8 cmd_type;
struct MPI2_RAID_SCSI_IO_REQUEST *io_request = cmd->io_request;
@ -1854,7 +1974,11 @@ megasas_build_io_fusion(struct megasas_instance *instance,
return 1;
}
/* numSGE stores the lower 8 bits of sge_count;
* numSGEExt stores the upper 8 bits of sge_count.
*/
io_request->RaidContext.numSGE = sge_count;
io_request->RaidContext.numSGEExt = (u8)(sge_count >> 8);
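/* e.g. a (hypothetical) sge_count of 0x1A3 yields numSGE = 0xA3, numSGEExt = 0x01 */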
io_request->SGLFlags = cpu_to_le16(MPI2_SGE_FLAGS_64_BIT_ADDRESSING);
@ -2084,10 +2208,7 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
* pending to be completed
*/
if (threshold_reply_count >= THRESHOLD_REPLY_COUNT) {
if ((instance->pdev->device ==
PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device ==
PCI_DEVICE_ID_LSI_FURY))
if (fusion->adapter_type == INVADER_SERIES)
writel(((MSIxIndex & 0x7) << 24) |
fusion->last_reply_idx[MSIxIndex],
instance->reply_post_host_index_addr[MSIxIndex/8]);
@ -2103,8 +2224,7 @@ complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex)
return IRQ_NONE;
wmb();
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY))
if (fusion->adapter_type == INVADER_SERIES)
writel(((MSIxIndex & 0x7) << 24) |
fusion->last_reply_idx[MSIxIndex],
instance->reply_post_host_index_addr[MSIxIndex/8]);
@ -2227,8 +2347,7 @@ build_mpt_mfi_pass_thru(struct megasas_instance *instance,
io_req = cmd->io_request;
if ((instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) ||
(instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) {
if (fusion->adapter_type == INVADER_SERIES) {
struct MPI25_IEEE_SGE_CHAIN64 *sgl_ptr_end =
(struct MPI25_IEEE_SGE_CHAIN64 *)&io_req->SGL;
sgl_ptr_end += fusion->max_sge_in_main_msg - 1;
@ -2248,7 +2367,7 @@ build_mpt_mfi_pass_thru(struct megasas_instance *instance,
mpi25_ieee_chain->Flags = IEEE_SGE_FLAGS_CHAIN_ELEMENT |
MPI2_IEEE_SGE_FLAGS_IOCPLBNTA_ADDR;
mpi25_ieee_chain->Length = cpu_to_le32(MEGASAS_MAX_SZ_CHAIN_FRAME);
mpi25_ieee_chain->Length = cpu_to_le32(instance->max_chain_frame_sz);
return 0;
}
@ -2384,6 +2503,70 @@ static int
megasas_adp_reset_fusion(struct megasas_instance *instance,
struct megasas_register_set __iomem *regs)
{
u32 host_diag, abs_state, retry;
/* Now try to reset the chip */
writel(MPI2_WRSEQ_FLUSH_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_1ST_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_2ND_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_3RD_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_4TH_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_5TH_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_6TH_KEY_VALUE, &instance->reg_set->fusion_seq_offset);
/* Check that the diag write enable (DRWE) bit is on */
host_diag = readl(&instance->reg_set->fusion_host_diag);
retry = 0;
while (!(host_diag & HOST_DIAG_WRITE_ENABLE)) {
msleep(100);
host_diag = readl(&instance->reg_set->fusion_host_diag);
if (retry++ == 100) {
dev_warn(&instance->pdev->dev,
"Host diag unlock failed from %s %d\n",
__func__, __LINE__);
break;
}
}
if (!(host_diag & HOST_DIAG_WRITE_ENABLE))
return -1;
/* Send chip reset command */
writel(host_diag | HOST_DIAG_RESET_ADAPTER,
&instance->reg_set->fusion_host_diag);
msleep(3000);
/* Make sure reset adapter bit is cleared */
host_diag = readl(&instance->reg_set->fusion_host_diag);
retry = 0;
while (host_diag & HOST_DIAG_RESET_ADAPTER) {
msleep(100);
host_diag = readl(&instance->reg_set->fusion_host_diag);
if (retry++ == 1000) {
dev_warn(&instance->pdev->dev,
"Diag reset adapter never cleared %s %d\n",
__func__, __LINE__);
break;
}
}
if (host_diag & HOST_DIAG_RESET_ADAPTER)
return -1;
abs_state = instance->instancet->read_fw_status_reg(instance->reg_set)
& MFI_STATE_MASK;
retry = 0;
while ((abs_state <= MFI_STATE_FW_INIT) && (retry++ < 1000)) {
msleep(100);
abs_state = instance->instancet->
read_fw_status_reg(instance->reg_set) & MFI_STATE_MASK;
}
if (abs_state <= MFI_STATE_FW_INIT) {
dev_warn(&instance->pdev->dev,
"fw state < MFI_STATE_FW_INIT, state = 0x%x %s %d\n",
abs_state, __func__, __LINE__);
return -1;
}
return 0;
}
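/*
 * Poll budgets implied by the loops above: up to 100 * 100ms = 10s for the
 * DRWE bit to appear, and up to 1000 * 100ms = 100s each for the
 * reset-adapter bit to clear and for the firmware state to move past
 * MFI_STATE_FW_INIT.
 */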
@ -2512,8 +2695,10 @@ void megasas_refire_mgmt_cmd(struct megasas_instance *instance)
continue;
req_desc = megasas_get_request_descriptor
(instance, smid - 1);
if (req_desc && (cmd_mfi->frame->dcmd.opcode !=
cpu_to_le32(MR_DCMD_LD_MAP_GET_INFO)))
if (req_desc && ((cmd_mfi->frame->dcmd.opcode !=
cpu_to_le32(MR_DCMD_LD_MAP_GET_INFO)) &&
(cmd_mfi->frame->dcmd.opcode !=
cpu_to_le32(MR_DCMD_SYSTEM_PD_MAP_GET_INFO))))
megasas_fire_cmd_fusion(instance, req_desc);
else
megasas_return_cmd(instance, cmd_mfi);
@ -2547,11 +2732,11 @@ int megasas_check_mpio_paths(struct megasas_instance *instance,
/* Core fusion reset function */
int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
{
int retval = SUCCESS, i, retry = 0, convert = 0;
int retval = SUCCESS, i, convert = 0;
struct megasas_instance *instance;
struct megasas_cmd_fusion *cmd_fusion;
struct fusion_context *fusion;
u32 host_diag, abs_state, status_reg, reset_adapter;
u32 abs_state, status_reg, reset_adapter;
u32 io_timeout_in_crash_mode = 0;
struct scsi_cmnd *scmd_local = NULL;
@ -2705,82 +2890,11 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
/* Now try to reset the chip */
for (i = 0; i < MEGASAS_FUSION_MAX_RESET_TRIES; i++) {
writel(MPI2_WRSEQ_FLUSH_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_1ST_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_2ND_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_3RD_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_4TH_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_5TH_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
writel(MPI2_WRSEQ_6TH_KEY_VALUE,
&instance->reg_set->fusion_seq_offset);
/* Check that the diag write enable (DRWE) bit is on */
host_diag = readl(&instance->reg_set->fusion_host_diag);
retry = 0;
while (!(host_diag & HOST_DIAG_WRITE_ENABLE)) {
msleep(100);
host_diag =
readl(&instance->reg_set->fusion_host_diag);
if (retry++ == 100) {
dev_warn(&instance->pdev->dev,
"Host diag unlock failed! "
"for scsi%d\n",
instance->host->host_no);
break;
}
}
if (!(host_diag & HOST_DIAG_WRITE_ENABLE))
if (instance->instancet->adp_reset
(instance, instance->reg_set))
continue;
/* Send chip reset command */
writel(host_diag | HOST_DIAG_RESET_ADAPTER,
&instance->reg_set->fusion_host_diag);
msleep(3000);
/* Make sure reset adapter bit is cleared */
host_diag = readl(&instance->reg_set->fusion_host_diag);
retry = 0;
while (host_diag & HOST_DIAG_RESET_ADAPTER) {
msleep(100);
host_diag =
readl(&instance->reg_set->fusion_host_diag);
if (retry++ == 1000) {
dev_warn(&instance->pdev->dev,
"Diag reset adapter never "
"cleared for scsi%d!\n",
instance->host->host_no);
break;
}
}
if (host_diag & HOST_DIAG_RESET_ADAPTER)
continue;
abs_state =
instance->instancet->read_fw_status_reg(
instance->reg_set) & MFI_STATE_MASK;
retry = 0;
while ((abs_state <= MFI_STATE_FW_INIT) &&
(retry++ < 1000)) {
msleep(100);
abs_state =
instance->instancet->read_fw_status_reg(
instance->reg_set) & MFI_STATE_MASK;
}
if (abs_state <= MFI_STATE_FW_INIT) {
dev_warn(&instance->pdev->dev, "firmware "
"state < MFI_STATE_FW_INIT, state = "
"0x%x for scsi%d\n", abs_state,
instance->host->host_no);
continue;
}
/* Wait for FW to become ready */
if (megasas_transition_to_ready(instance, 1)) {
dev_warn(&instance->pdev->dev, "Failed to "
@ -2816,6 +2930,8 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int iotimeout)
if (!megasas_get_map_info(instance))
megasas_sync_map_info(instance);
megasas_setup_jbod_map(instance);
clear_bit(MEGASAS_FUSION_IN_RESET,
&instance->reset_flags);
instance->instancet->enable_intr(instance);

View File

@ -35,8 +35,13 @@
#define _MEGARAID_SAS_FUSION_H_
/* Fusion defines */
#define MEGASAS_MAX_SZ_CHAIN_FRAME 1024
#define MEGASAS_CHAIN_FRAME_SZ_MIN 1024
#define MFI_FUSION_ENABLE_INTERRUPT_MASK (0x00000009)
#define MEGASAS_MAX_CHAIN_SHIFT 5
#define MEGASAS_MAX_CHAIN_SIZE_UNITS_MASK 0x400000
#define MEGASAS_MAX_CHAIN_SIZE_MASK 0x3E0
#define MEGASAS_256K_IO 128
#define MEGASAS_1MB_IO (MEGASAS_256K_IO * 4)
#define MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE 256
#define MEGASAS_MPI2_FUNCTION_PASSTHRU_IO_REQUEST 0xF0
#define MEGASAS_MPI2_FUNCTION_LD_IO_REQUEST 0xF1
@ -89,6 +94,12 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
#define MEGASAS_FP_CMD_LEN 16
#define MEGASAS_FUSION_IN_RESET 0
#define THRESHOLD_REPLY_COUNT 50
#define JBOD_MAPS_COUNT 2
enum MR_FUSION_ADAPTER_TYPE {
THUNDERBOLT_SERIES = 0,
INVADER_SERIES = 1,
};
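/*
 * Illustrative sketch only (the actual assignment is made in the probe path,
 * which is not part of this hunk): adapter_type caches the adapter family
 * derived from the PCI device ID so the I/O and reset paths can test one
 * cached value instead of repeating PCI-ID comparisons, e.g.:
 *
 *	switch (instance->pdev->device) {
 *	case PCI_DEVICE_ID_LSI_INVADER:
 *	case PCI_DEVICE_ID_LSI_FURY:
 *		fusion->adapter_type = INVADER_SERIES;
 *		break;
 *	default:
 *		fusion->adapter_type = THUNDERBOLT_SERIES;
 *	}
 */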
/*
* Raid Context structure which describes MegaRAID specific IO Parameters
@ -117,7 +128,9 @@ struct RAID_CONTEXT {
u8 numSGE;
__le16 configSeqNum;
u8 spanArm;
u8 resvd2[3];
u8 priority;
u8 numSGEExt;
u8 resvd2;
};
#define RAID_CTX_SPANARM_ARM_SHIFT (0)
@ -486,6 +499,7 @@ struct MPI2_IOC_INIT_REQUEST {
#define MAX_PHYSICAL_DEVICES 256
#define MAX_RAIDMAP_PHYSICAL_DEVICES (MAX_PHYSICAL_DEVICES)
#define MR_DCMD_LD_MAP_GET_INFO 0x0300e101
#define MR_DCMD_SYSTEM_PD_MAP_GET_INFO 0x0200e102
#define MR_DCMD_CTRL_SHARED_HOST_MEM_ALLOC 0x010e8485 /* SR-IOV HB alloc*/
#define MR_DCMD_LD_VF_MAP_GET_ALL_LDS_111 0x03200200
#define MR_DCMD_LD_VF_MAP_GET_ALL_LDS 0x03150200
@ -789,6 +803,21 @@ struct MR_FW_RAID_MAP_EXT {
struct MR_LD_SPAN_MAP ldSpanMap[MAX_LOGICAL_DRIVES_EXT];
};
/*
* define MR_PD_CFG_SEQ structure for system PDs
*/
struct MR_PD_CFG_SEQ {
__le16 seqNum;
__le16 devHandle;
u8 reserved[4];
} __packed;
struct MR_PD_CFG_SEQ_NUM_SYNC {
__le32 size;
__le32 count;
struct MR_PD_CFG_SEQ seq[1];
} __packed;
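/*
 * seq[1] is a placeholder for a variable-length array; the buffer actually
 * allocated (see megasas_sync_pd_seq_num) is sized as
 *   sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) +
 *   sizeof(struct MR_PD_CFG_SEQ) * (MAX_PHYSICAL_DEVICES - 1)
 * which, with the packed layouts above (16 and 8 bytes respectively), comes
 * to 16 + 8 * 255 = 2056 bytes.
 */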
struct fusion_context {
struct megasas_cmd_fusion **cmd_list;
dma_addr_t req_frames_desc_phys;
@ -828,9 +857,12 @@ struct fusion_context {
u32 current_map_sz;
u32 drv_map_sz;
u32 drv_map_pages;
struct MR_PD_CFG_SEQ_NUM_SYNC *pd_seq_sync[JBOD_MAPS_COUNT];
dma_addr_t pd_seq_phys[JBOD_MAPS_COUNT];
u8 fast_path_io;
struct LD_LOAD_BALANCE_INFO load_balance_info[MAX_LOGICAL_DRIVES_EXT];
LD_SPAN_INFO log_to_span[MAX_LOGICAL_DRIVES_EXT];
u8 adapter_type;
};
union desc_value {

View File

@ -1,67 +0,0 @@
#
# Kernel configuration file for the MPT2SAS
#
# This code is based on drivers/scsi/mpt2sas/Kconfig
# Copyright (C) 2007-2014 LSI Corporation
# (mailto:DL-MPTFusionLinux@lsi.com)
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# NO WARRANTY
# THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
# LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
# solely responsible for determining the appropriateness of using and
# distributing the Program and assumes all risks associated with its
# exercise of rights under this Agreement, including but not limited to
# the risks and costs of program errors, damage to or loss of data,
# programs or equipment, and unavailability or interruption of operations.
# DISCLAIMER OF LIABILITY
# NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
# HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
# USA.
config SCSI_MPT2SAS
tristate "LSI MPT Fusion SAS 2.0 Device Driver"
depends on PCI && SCSI
select SCSI_SAS_ATTRS
select RAID_ATTRS
---help---
This driver supports PCI-Express SAS 6Gb/s Host Adapters.
config SCSI_MPT2SAS_MAX_SGE
int "LSI MPT Fusion Max number of SG Entries (16 - 128)"
depends on PCI && SCSI && SCSI_MPT2SAS
default "128"
range 16 128
---help---
This option allows you to specify the maximum number of scatter-
gather entries per I/O. The driver default is 128, which matches
SAFE_PHYS_SEGMENTS. However, it may be decreased down to 16.
Decreasing this parameter will reduce memory requirements
on a per controller instance.
config SCSI_MPT2SAS_LOGGING
bool "LSI MPT Fusion logging facility"
depends on PCI && SCSI && SCSI_MPT2SAS
---help---
This turns on a logging facility.

View File

@ -1,7 +0,0 @@
# mpt2sas makefile
obj-$(CONFIG_SCSI_MPT2SAS) += mpt2sas.o
mpt2sas-y += mpt2sas_base.o \
mpt2sas_config.o \
mpt2sas_scsih.o \
mpt2sas_transport.o \
mpt2sas_ctl.o

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,461 +0,0 @@
/*
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_init.h
* Title: MPI SCSI initiator mode messages and structures
* Creation Date: June 23, 2006
*
* mpi2_init.h Version: 02.00.15
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 10-31-07 02.00.01 Fixed name for pMpi2SCSITaskManagementRequest_t.
* 12-18-07 02.00.02 Modified Task Management Target Reset Method defines.
* 02-29-08 02.00.03 Added Query Task Set and Query Unit Attention.
* 03-03-08 02.00.04 Fixed name of struct _MPI2_SCSI_TASK_MANAGE_REPLY.
* 05-21-08 02.00.05 Fixed typo in name of Mpi2SepRequest_t.
* 10-02-08 02.00.06 Removed Untagged and No Disconnect values from SCSI IO
* Control field Task Attribute flags.
* Moved LUN field defines to mpi2.h because they are
* common to many structures.
* 05-06-09 02.00.07 Changed task management type of Query Unit Attention to
* Query Asynchronous Event.
* Defined two new bits in the SlotStatus field of the SCSI
* Enclosure Processor Request and Reply.
* 10-28-09 02.00.08 Added defines for decoding the ResponseInfo bytes for
* both SCSI IO Error Reply and SCSI Task Management Reply.
* Added ResponseInfo field to MPI2_SCSI_TASK_MANAGE_REPLY.
* Added MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG define.
* 02-10-10 02.00.09 Removed unused structure that had "#if 0" around it.
* 05-12-10 02.00.10 Added optional vendor-unique region to SCSI IO Request.
* 11-10-10 02.00.11 Added MPI2_SCSIIO_NUM_SGLOFFSETS define.
* 02-06-12 02.00.13 Added alternate defines for Task Priority / Command
* Priority to match SAM-4.
* 07-10-12 02.00.14 Added MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION.
* 04-09-13 02.00.15 Added SCSIStatusQualifier field to MPI2_SCSI_IO_REPLY,
* replacing the Reserved4 field.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_INIT_H
#define MPI2_INIT_H
/*****************************************************************************
*
* SCSI Initiator Messages
*
*****************************************************************************/
/****************************************************************************
* SCSI IO messages and associated structures
****************************************************************************/
typedef struct
{
U8 CDB[20]; /* 0x00 */
U32 PrimaryReferenceTag; /* 0x14 */
U16 PrimaryApplicationTag; /* 0x18 */
U16 PrimaryApplicationTagMask; /* 0x1A */
U32 TransferLength; /* 0x1C */
} MPI2_SCSI_IO_CDB_EEDP32, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_EEDP32,
Mpi2ScsiIoCdbEedp32_t, MPI2_POINTER pMpi2ScsiIoCdbEedp32_t;
typedef union
{
U8 CDB32[32];
MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
MPI2_SGE_SIMPLE_UNION SGE;
} MPI2_SCSI_IO_CDB_UNION, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_UNION,
Mpi2ScsiIoCdb_t, MPI2_POINTER pMpi2ScsiIoCdb_t;
/* SCSI IO Request Message */
typedef struct _MPI2_SCSI_IO_REQUEST
{
U16 DevHandle; /* 0x00 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved1; /* 0x04 */
U8 Reserved2; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved3; /* 0x0A */
U32 SenseBufferLowAddress; /* 0x0C */
U16 SGLFlags; /* 0x10 */
U8 SenseBufferLength; /* 0x12 */
U8 Reserved4; /* 0x13 */
U8 SGLOffset0; /* 0x14 */
U8 SGLOffset1; /* 0x15 */
U8 SGLOffset2; /* 0x16 */
U8 SGLOffset3; /* 0x17 */
U32 SkipCount; /* 0x18 */
U32 DataLength; /* 0x1C */
U32 BidirectionalDataLength; /* 0x20 */
U16 IoFlags; /* 0x24 */
U16 EEDPFlags; /* 0x26 */
U32 EEDPBlockSize; /* 0x28 */
U32 SecondaryReferenceTag; /* 0x2C */
U16 SecondaryApplicationTag; /* 0x30 */
U16 ApplicationTagTranslationMask; /* 0x32 */
U8 LUN[8]; /* 0x34 */
U32 Control; /* 0x3C */
MPI2_SCSI_IO_CDB_UNION CDB; /* 0x40 */
#ifdef MPI2_SCSI_IO_VENDOR_UNIQUE_REGION /* typically this is left undefined */
MPI2_SCSI_IO_VENDOR_UNIQUE VendorRegion;
#endif
MPI2_SGE_IO_UNION SGL; /* 0x60 */
} MPI2_SCSI_IO_REQUEST, MPI2_POINTER PTR_MPI2_SCSI_IO_REQUEST,
Mpi2SCSIIORequest_t, MPI2_POINTER pMpi2SCSIIORequest_t;
/* SCSI IO MsgFlags bits */
/* MsgFlags for SenseBufferAddressSpace */
#define MPI2_SCSIIO_MSGFLAGS_MASK_SENSE_ADDR (0x0C)
#define MPI2_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR (0x00)
#define MPI2_SCSIIO_MSGFLAGS_IOCDDR_SENSE_ADDR (0x04)
#define MPI2_SCSIIO_MSGFLAGS_IOCPLB_SENSE_ADDR (0x08)
#define MPI2_SCSIIO_MSGFLAGS_IOCPLBNTA_SENSE_ADDR (0x0C)
/* SCSI IO SGLFlags bits */
/* base values for Data Location Address Space */
#define MPI2_SCSIIO_SGLFLAGS_ADDR_MASK (0x0C)
#define MPI2_SCSIIO_SGLFLAGS_SYSTEM_ADDR (0x00)
#define MPI2_SCSIIO_SGLFLAGS_IOCDDR_ADDR (0x04)
#define MPI2_SCSIIO_SGLFLAGS_IOCPLB_ADDR (0x08)
#define MPI2_SCSIIO_SGLFLAGS_IOCPLBNTA_ADDR (0x0C)
/* base values for Type */
#define MPI2_SCSIIO_SGLFLAGS_TYPE_MASK (0x03)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_MPI (0x00)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE32 (0x01)
#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE64 (0x02)
/* shift values for each sub-field */
#define MPI2_SCSIIO_SGLFLAGS_SGL3_SHIFT (12)
#define MPI2_SCSIIO_SGLFLAGS_SGL2_SHIFT (8)
#define MPI2_SCSIIO_SGLFLAGS_SGL1_SHIFT (4)
#define MPI2_SCSIIO_SGLFLAGS_SGL0_SHIFT (0)
/* number of SGLOffset fields */
#define MPI2_SCSIIO_NUM_SGLOFFSETS (4)
/* SCSI IO IoFlags bits */
/* Large CDB Address Space */
#define MPI2_SCSIIO_CDB_ADDR_MASK (0x6000)
#define MPI2_SCSIIO_CDB_ADDR_SYSTEM (0x0000)
#define MPI2_SCSIIO_CDB_ADDR_IOCDDR (0x2000)
#define MPI2_SCSIIO_CDB_ADDR_IOCPLB (0x4000)
#define MPI2_SCSIIO_CDB_ADDR_IOCPLBNTA (0x6000)
#define MPI2_SCSIIO_IOFLAGS_LARGE_CDB (0x1000)
#define MPI2_SCSIIO_IOFLAGS_BIDIRECTIONAL (0x0800)
#define MPI2_SCSIIO_IOFLAGS_MULTICAST (0x0400)
#define MPI2_SCSIIO_IOFLAGS_CMD_DETERMINES_DATA_DIR (0x0200)
#define MPI2_SCSIIO_IOFLAGS_CDBLENGTH_MASK (0x01FF)
/* SCSI IO EEDPFlags bits */
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG (0x8000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_REFTAG (0x4000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_APPTAG (0x2000)
#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_APPTAG (0x1000)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG (0x0400)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_APPTAG (0x0200)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD (0x0100)
#define MPI2_SCSIIO_EEDPFLAGS_PASSTHRU_REFTAG (0x0008)
#define MPI2_SCSIIO_EEDPFLAGS_MASK_OP (0x0007)
#define MPI2_SCSIIO_EEDPFLAGS_NOOP_OP (0x0000)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_OP (0x0001)
#define MPI2_SCSIIO_EEDPFLAGS_STRIP_OP (0x0002)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP (0x0003)
#define MPI2_SCSIIO_EEDPFLAGS_INSERT_OP (0x0004)
#define MPI2_SCSIIO_EEDPFLAGS_REPLACE_OP (0x0006)
#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REGEN_OP (0x0007)
/* SCSI IO LUN fields: use MPI2_LUN_ from mpi2.h */
/* SCSI IO Control bits */
#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_MASK (0xFC000000)
#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_SHIFT (26)
#define MPI2_SCSIIO_CONTROL_DATADIRECTION_MASK (0x03000000)
#define MPI2_SCSIIO_CONTROL_SHIFT_DATADIRECTION (24)
#define MPI2_SCSIIO_CONTROL_NODATATRANSFER (0x00000000)
#define MPI2_SCSIIO_CONTROL_WRITE (0x01000000)
#define MPI2_SCSIIO_CONTROL_READ (0x02000000)
#define MPI2_SCSIIO_CONTROL_BIDIRECTIONAL (0x03000000)
#define MPI2_SCSIIO_CONTROL_TASKPRI_MASK (0x00007800)
#define MPI2_SCSIIO_CONTROL_TASKPRI_SHIFT (11)
/* alternate name for the previous field; called Command Priority in SAM-4 */
#define MPI2_SCSIIO_CONTROL_CMDPRI_MASK (0x00007800)
#define MPI2_SCSIIO_CONTROL_CMDPRI_SHIFT (11)
#define MPI2_SCSIIO_CONTROL_TASKATTRIBUTE_MASK (0x00000700)
#define MPI2_SCSIIO_CONTROL_SIMPLEQ (0x00000000)
#define MPI2_SCSIIO_CONTROL_HEADOFQ (0x00000100)
#define MPI2_SCSIIO_CONTROL_ORDEREDQ (0x00000200)
#define MPI2_SCSIIO_CONTROL_ACAQ (0x00000400)
#define MPI2_SCSIIO_CONTROL_TLR_MASK (0x000000C0)
#define MPI2_SCSIIO_CONTROL_NO_TLR (0x00000000)
#define MPI2_SCSIIO_CONTROL_TLR_ON (0x00000040)
#define MPI2_SCSIIO_CONTROL_TLR_OFF (0x00000080)
/* SCSI IO Error Reply Message */
typedef struct _MPI2_SCSI_IO_REPLY
{
U16 DevHandle; /* 0x00 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved1; /* 0x04 */
U8 Reserved2; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved3; /* 0x0A */
U8 SCSIStatus; /* 0x0C */
U8 SCSIState; /* 0x0D */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U32 TransferCount; /* 0x14 */
U32 SenseCount; /* 0x18 */
U32 ResponseInfo; /* 0x1C */
U16 TaskTag; /* 0x20 */
U16 SCSIStatusQualifier; /* 0x22 */
U32 BidirectionalTransferCount; /* 0x24 */
U32 Reserved5; /* 0x28 */
U32 Reserved6; /* 0x2C */
} MPI2_SCSI_IO_REPLY, MPI2_POINTER PTR_MPI2_SCSI_IO_REPLY,
Mpi2SCSIIOReply_t, MPI2_POINTER pMpi2SCSIIOReply_t;
/* SCSI IO Reply SCSIStatus values (SAM-4 status codes) */
#define MPI2_SCSI_STATUS_GOOD (0x00)
#define MPI2_SCSI_STATUS_CHECK_CONDITION (0x02)
#define MPI2_SCSI_STATUS_CONDITION_MET (0x04)
#define MPI2_SCSI_STATUS_BUSY (0x08)
#define MPI2_SCSI_STATUS_INTERMEDIATE (0x10)
#define MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET (0x14)
#define MPI2_SCSI_STATUS_RESERVATION_CONFLICT (0x18)
#define MPI2_SCSI_STATUS_COMMAND_TERMINATED (0x22) /* obsolete */
#define MPI2_SCSI_STATUS_TASK_SET_FULL (0x28)
#define MPI2_SCSI_STATUS_ACA_ACTIVE (0x30)
#define MPI2_SCSI_STATUS_TASK_ABORTED (0x40)
/* SCSI IO Reply SCSIState flags */
#define MPI2_SCSI_STATE_RESPONSE_INFO_VALID (0x10)
#define MPI2_SCSI_STATE_TERMINATED (0x08)
#define MPI2_SCSI_STATE_NO_SCSI_STATUS (0x04)
#define MPI2_SCSI_STATE_AUTOSENSE_FAILED (0x02)
#define MPI2_SCSI_STATE_AUTOSENSE_VALID (0x01)
/* masks and shifts for the ResponseInfo field */
#define MPI2_SCSI_RI_MASK_REASONCODE (0x000000FF)
#define MPI2_SCSI_RI_SHIFT_REASONCODE (0)
#define MPI2_SCSI_TASKTAG_UNKNOWN (0xFFFF)
/****************************************************************************
* SCSI Task Management messages
****************************************************************************/
/* SCSI Task Management Request Message */
typedef struct _MPI2_SCSI_TASK_MANAGE_REQUEST
{
U16 DevHandle; /* 0x00 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U8 Reserved1; /* 0x04 */
U8 TaskType; /* 0x05 */
U8 Reserved2; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved3; /* 0x0A */
U8 LUN[8]; /* 0x0C */
U32 Reserved4[7]; /* 0x14 */
U16 TaskMID; /* 0x30 */
U16 Reserved5; /* 0x32 */
} MPI2_SCSI_TASK_MANAGE_REQUEST,
MPI2_POINTER PTR_MPI2_SCSI_TASK_MANAGE_REQUEST,
Mpi2SCSITaskManagementRequest_t,
MPI2_POINTER pMpi2SCSITaskManagementRequest_t;
/* TaskType values */
#define MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK (0x01)
#define MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET (0x02)
#define MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET (0x03)
#define MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET (0x05)
#define MPI2_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET (0x06)
#define MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK (0x07)
#define MPI2_SCSITASKMGMT_TASKTYPE_CLR_ACA (0x08)
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_TASK_SET (0x09)
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_ASYNC_EVENT (0x0A)
/* obsolete TaskType name */
#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_UNIT_ATTENTION \
(MPI2_SCSITASKMGMT_TASKTYPE_QRY_ASYNC_EVENT)
/* MsgFlags bits */
#define MPI2_SCSITASKMGMT_MSGFLAGS_MASK_TARGET_RESET (0x18)
#define MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET (0x00)
#define MPI2_SCSITASKMGMT_MSGFLAGS_NEXUS_RESET_SRST (0x08)
#define MPI2_SCSITASKMGMT_MSGFLAGS_SAS_HARD_LINK_RESET (0x10)
#define MPI2_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU (0x01)
/* SCSI Task Management Reply Message */
typedef struct _MPI2_SCSI_TASK_MANAGE_REPLY
{
U16 DevHandle; /* 0x00 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U8 ResponseCode; /* 0x04 */
U8 TaskType; /* 0x05 */
U8 Reserved1; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved2; /* 0x0A */
U16 Reserved3; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U32 TerminationCount; /* 0x14 */
U32 ResponseInfo; /* 0x18 */
} MPI2_SCSI_TASK_MANAGE_REPLY,
MPI2_POINTER PTR_MPI2_SCSI_TASK_MANAGE_REPLY,
Mpi2SCSITaskManagementReply_t, MPI2_POINTER pMpi2SCSIManagementReply_t;
/* ResponseCode values */
#define MPI2_SCSITASKMGMT_RSP_TM_COMPLETE (0x00)
#define MPI2_SCSITASKMGMT_RSP_INVALID_FRAME (0x02)
#define MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED (0x04)
#define MPI2_SCSITASKMGMT_RSP_TM_FAILED (0x05)
#define MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED (0x08)
#define MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN (0x09)
#define MPI2_SCSITASKMGMT_RSP_TM_OVERLAPPED_TAG (0x0A)
#define MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC (0x80)
/* masks and shifts for the ResponseInfo field */
#define MPI2_SCSITASKMGMT_RI_MASK_REASONCODE (0x000000FF)
#define MPI2_SCSITASKMGMT_RI_SHIFT_REASONCODE (0)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI2 (0x0000FF00)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI2 (8)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI1 (0x00FF0000)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI1 (16)
#define MPI2_SCSITASKMGMT_RI_MASK_ARI0 (0xFF000000)
#define MPI2_SCSITASKMGMT_RI_SHIFT_ARI0 (24)
/****************************************************************************
* SCSI Enclosure Processor messages
****************************************************************************/
/* SCSI Enclosure Processor Request Message */
typedef struct _MPI2_SEP_REQUEST
{
U16 DevHandle; /* 0x00 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U8 Action; /* 0x04 */
U8 Flags; /* 0x05 */
U8 Reserved1; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved2; /* 0x0A */
U32 SlotStatus; /* 0x0C */
U32 Reserved3; /* 0x10 */
U32 Reserved4; /* 0x14 */
U32 Reserved5; /* 0x18 */
U16 Slot; /* 0x1C */
U16 EnclosureHandle; /* 0x1E */
} MPI2_SEP_REQUEST, MPI2_POINTER PTR_MPI2_SEP_REQUEST,
Mpi2SepRequest_t, MPI2_POINTER pMpi2SepRequest_t;
/* Action defines */
#define MPI2_SEP_REQ_ACTION_WRITE_STATUS (0x00)
#define MPI2_SEP_REQ_ACTION_READ_STATUS (0x01)
/* Flags defines */
#define MPI2_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS (0x00)
#define MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS (0x01)
/* SlotStatus defines */
#define MPI2_SEP_REQ_SLOTSTATUS_REQUEST_REMOVE (0x00040000)
#define MPI2_SEP_REQ_SLOTSTATUS_IDENTIFY_REQUEST (0x00020000)
#define MPI2_SEP_REQ_SLOTSTATUS_REBUILD_STOPPED (0x00000200)
#define MPI2_SEP_REQ_SLOTSTATUS_HOT_SPARE (0x00000100)
#define MPI2_SEP_REQ_SLOTSTATUS_UNCONFIGURED (0x00000080)
#define MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT (0x00000040)
#define MPI2_SEP_REQ_SLOTSTATUS_IN_CRITICAL_ARRAY (0x00000010)
#define MPI2_SEP_REQ_SLOTSTATUS_IN_FAILED_ARRAY (0x00000008)
#define MPI2_SEP_REQ_SLOTSTATUS_DEV_REBUILDING (0x00000004)
#define MPI2_SEP_REQ_SLOTSTATUS_DEV_FAULTY (0x00000002)
#define MPI2_SEP_REQ_SLOTSTATUS_NO_ERROR (0x00000001)
/* SCSI Enclosure Processor Reply Message */
typedef struct _MPI2_SEP_REPLY
{
U16 DevHandle; /* 0x00 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U8 Action; /* 0x04 */
U8 Flags; /* 0x05 */
U8 Reserved1; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved2; /* 0x0A */
U16 Reserved3; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U32 SlotStatus; /* 0x14 */
U32 Reserved4; /* 0x18 */
U16 Slot; /* 0x1C */
U16 EnclosureHandle; /* 0x1E */
} MPI2_SEP_REPLY, MPI2_POINTER PTR_MPI2_SEP_REPLY,
Mpi2SepReply_t, MPI2_POINTER pMpi2SepReply_t;
/* SlotStatus defines */
#define MPI2_SEP_REPLY_SLOTSTATUS_REMOVE_READY (0x00040000)
#define MPI2_SEP_REPLY_SLOTSTATUS_IDENTIFY_REQUEST (0x00020000)
#define MPI2_SEP_REPLY_SLOTSTATUS_REBUILD_STOPPED (0x00000200)
#define MPI2_SEP_REPLY_SLOTSTATUS_HOT_SPARE (0x00000100)
#define MPI2_SEP_REPLY_SLOTSTATUS_UNCONFIGURED (0x00000080)
#define MPI2_SEP_REPLY_SLOTSTATUS_PREDICTED_FAULT (0x00000040)
#define MPI2_SEP_REPLY_SLOTSTATUS_IN_CRITICAL_ARRAY (0x00000010)
#define MPI2_SEP_REPLY_SLOTSTATUS_IN_FAILED_ARRAY (0x00000008)
#define MPI2_SEP_REPLY_SLOTSTATUS_DEV_REBUILDING (0x00000004)
#define MPI2_SEP_REPLY_SLOTSTATUS_DEV_FAULTY (0x00000002)
#define MPI2_SEP_REPLY_SLOTSTATUS_NO_ERROR (0x00000001)
#endif

File diff suppressed because it is too large

View File

@ -1,366 +0,0 @@
/*
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_raid.h
* Title: MPI Integrated RAID messages and structures
* Creation Date: April 26, 2007
*
* mpi2_raid.h Version: 02.00.10
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 08-31-07 02.00.01 Modifications to RAID Action request and reply,
* including the Actions and ActionData.
* 02-29-08 02.00.02 Added MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD.
* 05-21-08 02.00.03 Added MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS so that
* the PhysDisk array in MPI2_RAID_VOLUME_CREATION_STRUCT
* can be sized by the build environment.
* 07-30-09 02.00.04 Added proper define for the Use Default Settings bit of
* VolumeCreationFlags and marked the old one as obsolete.
* 05-12-10 02.00.05 Added MPI2_RAID_VOL_FLAGS_OP_MDC define.
* 08-24-10 02.00.06 Added MPI2_RAID_ACTION_COMPATIBILITY_CHECK along with
* related structures and defines.
* Added product-specific range to RAID Action values.
* 02-06-12 02.00.08 Added MPI2_RAID_ACTION_PHYSDISK_HIDDEN.
* 07-26-12 02.00.09 Added ElapsedSeconds field to MPI2_RAID_VOL_INDICATOR.
* Added MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID define.
* 04-17-13 02.00.10 Added MPI25_RAID_ACTION_ADATA_ALLOW_PI.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_RAID_H
#define MPI2_RAID_H
/*****************************************************************************
*
* Integrated RAID Messages
*
*****************************************************************************/
/****************************************************************************
* RAID Action messages
****************************************************************************/
/* ActionDataWord defines for use with MPI2_RAID_ACTION_CREATE_VOLUME action */
#define MPI25_RAID_ACTION_ADATA_ALLOW_PI (0x80000000)
/* ActionDataWord defines for use with MPI2_RAID_ACTION_DELETE_VOLUME action */
#define MPI2_RAID_ACTION_ADATA_KEEP_LBA0 (0x00000000)
#define MPI2_RAID_ACTION_ADATA_ZERO_LBA0 (0x00000001)
/* use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */
/* ActionDataWord defines for use with MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES action */
#define MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD (0x00000001)
/* ActionDataWord for MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE Action */
typedef struct _MPI2_RAID_ACTION_RATE_DATA
{
U8 RateToChange; /* 0x00 */
U8 RateOrMode; /* 0x01 */
U16 DataScrubDuration; /* 0x02 */
} MPI2_RAID_ACTION_RATE_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_RATE_DATA,
Mpi2RaidActionRateData_t, MPI2_POINTER pMpi2RaidActionRateData_t;
#define MPI2_RAID_ACTION_SET_RATE_RESYNC (0x00)
#define MPI2_RAID_ACTION_SET_RATE_DATA_SCRUB (0x01)
#define MPI2_RAID_ACTION_SET_RATE_POWERSAVE_MODE (0x02)
/* ActionDataWord for MPI2_RAID_ACTION_START_RAID_FUNCTION Action */
typedef struct _MPI2_RAID_ACTION_START_RAID_FUNCTION
{
U8 RAIDFunction; /* 0x00 */
U8 Flags; /* 0x01 */
U16 Reserved1; /* 0x02 */
} MPI2_RAID_ACTION_START_RAID_FUNCTION,
MPI2_POINTER PTR_MPI2_RAID_ACTION_START_RAID_FUNCTION,
Mpi2RaidActionStartRaidFunction_t,
MPI2_POINTER pMpi2RaidActionStartRaidFunction_t;
/* defines for the RAIDFunction field */
#define MPI2_RAID_ACTION_START_BACKGROUND_INIT (0x00)
#define MPI2_RAID_ACTION_START_ONLINE_CAP_EXPANSION (0x01)
#define MPI2_RAID_ACTION_START_CONSISTENCY_CHECK (0x02)
/* defines for the Flags field */
#define MPI2_RAID_ACTION_START_NEW (0x00)
#define MPI2_RAID_ACTION_START_RESUME (0x01)
/* ActionDataWord for MPI2_RAID_ACTION_STOP_RAID_FUNCTION Action */
typedef struct _MPI2_RAID_ACTION_STOP_RAID_FUNCTION
{
U8 RAIDFunction; /* 0x00 */
U8 Flags; /* 0x01 */
U16 Reserved1; /* 0x02 */
} MPI2_RAID_ACTION_STOP_RAID_FUNCTION,
MPI2_POINTER PTR_MPI2_RAID_ACTION_STOP_RAID_FUNCTION,
Mpi2RaidActionStopRaidFunction_t,
MPI2_POINTER pMpi2RaidActionStopRaidFunction_t;
/* defines for the RAIDFunction field */
#define MPI2_RAID_ACTION_STOP_BACKGROUND_INIT (0x00)
#define MPI2_RAID_ACTION_STOP_ONLINE_CAP_EXPANSION (0x01)
#define MPI2_RAID_ACTION_STOP_CONSISTENCY_CHECK (0x02)
/* defines for the Flags field */
#define MPI2_RAID_ACTION_STOP_ABORT (0x00)
#define MPI2_RAID_ACTION_STOP_PAUSE (0x01)
/* ActionDataWord for MPI2_RAID_ACTION_CREATE_HOT_SPARE Action */
typedef struct _MPI2_RAID_ACTION_HOT_SPARE
{
U8 HotSparePool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U16 DevHandle; /* 0x02 */
} MPI2_RAID_ACTION_HOT_SPARE, MPI2_POINTER PTR_MPI2_RAID_ACTION_HOT_SPARE,
Mpi2RaidActionHotSpare_t, MPI2_POINTER pMpi2RaidActionHotSpare_t;
/* ActionDataWord for MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE Action */
typedef struct _MPI2_RAID_ACTION_FW_UPDATE_MODE
{
U8 Flags; /* 0x00 */
U8 DeviceFirmwareUpdateModeTimeout; /* 0x01 */
U16 Reserved1; /* 0x02 */
} MPI2_RAID_ACTION_FW_UPDATE_MODE,
MPI2_POINTER PTR_MPI2_RAID_ACTION_FW_UPDATE_MODE,
Mpi2RaidActionFwUpdateMode_t, MPI2_POINTER pMpi2RaidActionFwUpdateMode_t;
/* ActionDataWord defines for use with MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE action */
#define MPI2_RAID_ACTION_ADATA_DISABLE_FW_UPDATE (0x00)
#define MPI2_RAID_ACTION_ADATA_ENABLE_FW_UPDATE (0x01)
typedef union _MPI2_RAID_ACTION_DATA
{
U32 Word;
MPI2_RAID_ACTION_RATE_DATA Rates;
MPI2_RAID_ACTION_START_RAID_FUNCTION StartRaidFunction;
MPI2_RAID_ACTION_STOP_RAID_FUNCTION StopRaidFunction;
MPI2_RAID_ACTION_HOT_SPARE HotSpare;
MPI2_RAID_ACTION_FW_UPDATE_MODE FwUpdateMode;
} MPI2_RAID_ACTION_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_DATA,
Mpi2RaidActionData_t, MPI2_POINTER pMpi2RaidActionData_t;
/* RAID Action Request Message */
typedef struct _MPI2_RAID_ACTION_REQUEST
{
U8 Action; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 VolDevHandle; /* 0x04 */
U8 PhysDiskNum; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved2; /* 0x0A */
U32 Reserved3; /* 0x0C */
MPI2_RAID_ACTION_DATA ActionDataWord; /* 0x10 */
MPI2_SGE_SIMPLE_UNION ActionDataSGE; /* 0x14 */
} MPI2_RAID_ACTION_REQUEST, MPI2_POINTER PTR_MPI2_RAID_ACTION_REQUEST,
Mpi2RaidActionRequest_t, MPI2_POINTER pMpi2RaidActionRequest_t;
/* RAID Action request Action values */
#define MPI2_RAID_ACTION_INDICATOR_STRUCT (0x01)
#define MPI2_RAID_ACTION_CREATE_VOLUME (0x02)
#define MPI2_RAID_ACTION_DELETE_VOLUME (0x03)
#define MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES (0x04)
#define MPI2_RAID_ACTION_ENABLE_ALL_VOLUMES (0x05)
#define MPI2_RAID_ACTION_PHYSDISK_OFFLINE (0x0A)
#define MPI2_RAID_ACTION_PHYSDISK_ONLINE (0x0B)
#define MPI2_RAID_ACTION_FAIL_PHYSDISK (0x0F)
#define MPI2_RAID_ACTION_ACTIVATE_VOLUME (0x11)
#define MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE (0x15)
#define MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE (0x17)
#define MPI2_RAID_ACTION_SET_VOLUME_NAME (0x18)
#define MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE (0x19)
#define MPI2_RAID_ACTION_ENABLE_FAILED_VOLUME (0x1C)
#define MPI2_RAID_ACTION_CREATE_HOT_SPARE (0x1D)
#define MPI2_RAID_ACTION_DELETE_HOT_SPARE (0x1E)
#define MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED (0x20)
#define MPI2_RAID_ACTION_START_RAID_FUNCTION (0x21)
#define MPI2_RAID_ACTION_STOP_RAID_FUNCTION (0x22)
#define MPI2_RAID_ACTION_COMPATIBILITY_CHECK (0x23)
#define MPI2_RAID_ACTION_PHYSDISK_HIDDEN (0x24)
#define MPI2_RAID_ACTION_MIN_PRODUCT_SPECIFIC (0x80)
#define MPI2_RAID_ACTION_MAX_PRODUCT_SPECIFIC (0xFF)
/* RAID Volume Creation Structure */
/*
* The following define can be customized for the targeted product.
*/
#ifndef MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS
#define MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS (1)
#endif
typedef struct _MPI2_RAID_VOLUME_PHYSDISK
{
U8 RAIDSetNum; /* 0x00 */
U8 PhysDiskMap; /* 0x01 */
U16 PhysDiskDevHandle; /* 0x02 */
} MPI2_RAID_VOLUME_PHYSDISK, MPI2_POINTER PTR_MPI2_RAID_VOLUME_PHYSDISK,
Mpi2RaidVolumePhysDisk_t, MPI2_POINTER pMpi2RaidVolumePhysDisk_t;
/* defines for the PhysDiskMap field */
#define MPI2_RAIDACTION_PHYSDISK_PRIMARY (0x01)
#define MPI2_RAIDACTION_PHYSDISK_SECONDARY (0x02)
typedef struct _MPI2_RAID_VOLUME_CREATION_STRUCT
{
U8 NumPhysDisks; /* 0x00 */
U8 VolumeType; /* 0x01 */
U16 Reserved1; /* 0x02 */
U32 VolumeCreationFlags; /* 0x04 */
U32 VolumeSettings; /* 0x08 */
U8 Reserved2; /* 0x0C */
U8 ResyncRate; /* 0x0D */
U16 DataScrubDuration; /* 0x0E */
U64 VolumeMaxLBA; /* 0x10 */
U32 StripeSize; /* 0x18 */
U8 Name[16]; /* 0x1C */
MPI2_RAID_VOLUME_PHYSDISK PhysDisk[MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS];/* 0x2C */
} MPI2_RAID_VOLUME_CREATION_STRUCT,
MPI2_POINTER PTR_MPI2_RAID_VOLUME_CREATION_STRUCT,
Mpi2RaidVolumeCreationStruct_t, MPI2_POINTER pMpi2RaidVolumeCreationStruct_t;
/* use MPI2_RAID_VOL_TYPE_ defines from mpi2_cnfg.h for VolumeType */
/* defines for the VolumeCreationFlags field */
#define MPI2_RAID_VOL_CREATION_DEFAULT_SETTINGS (0x80000000)
#define MPI2_RAID_VOL_CREATION_BACKGROUND_INIT (0x00000004)
#define MPI2_RAID_VOL_CREATION_LOW_LEVEL_INIT (0x00000002)
#define MPI2_RAID_VOL_CREATION_MIGRATE_DATA (0x00000001)
/* The following is an obsolete define.
* It must be shifted left 24 bits in order to set the proper bit.
*/
#define MPI2_RAID_VOL_CREATION_USE_DEFAULT_SETTINGS (0x80)
/* RAID Online Capacity Expansion Structure */
typedef struct _MPI2_RAID_ONLINE_CAPACITY_EXPANSION
{
U32 Flags; /* 0x00 */
U16 DevHandle0; /* 0x04 */
U16 Reserved1; /* 0x06 */
U16 DevHandle1; /* 0x08 */
U16 Reserved2; /* 0x0A */
} MPI2_RAID_ONLINE_CAPACITY_EXPANSION,
MPI2_POINTER PTR_MPI2_RAID_ONLINE_CAPACITY_EXPANSION,
Mpi2RaidOnlineCapacityExpansion_t,
MPI2_POINTER pMpi2RaidOnlineCapacityExpansion_t;
/* RAID Compatibility Input Structure */
typedef struct _MPI2_RAID_COMPATIBILITY_INPUT_STRUCT {
U16 SourceDevHandle; /* 0x00 */
U16 CandidateDevHandle; /* 0x02 */
U32 Flags; /* 0x04 */
U32 Reserved1; /* 0x08 */
U32 Reserved2; /* 0x0C */
} MPI2_RAID_COMPATIBILITY_INPUT_STRUCT,
MPI2_POINTER PTR_MPI2_RAID_COMPATIBILITY_INPUT_STRUCT,
Mpi2RaidCompatibilityInputStruct_t,
MPI2_POINTER pMpi2RaidCompatibilityInputStruct_t;
/* defines for RAID Compatibility Structure Flags field */
#define MPI2_RAID_COMPAT_SOURCE_IS_VOLUME_FLAG (0x00000002)
#define MPI2_RAID_COMPAT_REPORT_SOURCE_INFO_FLAG (0x00000001)
/* RAID Volume Indicator Structure */
typedef struct _MPI2_RAID_VOL_INDICATOR
{
U64 TotalBlocks; /* 0x00 */
U64 BlocksRemaining; /* 0x08 */
U32 Flags; /* 0x10 */
U32 ElapsedSeconds; /* 0x14 */
} MPI2_RAID_VOL_INDICATOR, MPI2_POINTER PTR_MPI2_RAID_VOL_INDICATOR,
Mpi2RaidVolIndicator_t, MPI2_POINTER pMpi2RaidVolIndicator_t;
/* defines for RAID Volume Indicator Flags field */
#define MPI2_RAID_VOL_FLAGS_ELAPSED_SECONDS_VALID (0x80000000)
#define MPI2_RAID_VOL_FLAGS_OP_MASK (0x0000000F)
#define MPI2_RAID_VOL_FLAGS_OP_BACKGROUND_INIT (0x00000000)
#define MPI2_RAID_VOL_FLAGS_OP_ONLINE_CAP_EXPANSION (0x00000001)
#define MPI2_RAID_VOL_FLAGS_OP_CONSISTENCY_CHECK (0x00000002)
#define MPI2_RAID_VOL_FLAGS_OP_RESYNC (0x00000003)
#define MPI2_RAID_VOL_FLAGS_OP_MDC (0x00000004)
/* RAID Compatibility Result Structure */
typedef struct _MPI2_RAID_COMPATIBILITY_RESULT_STRUCT {
U8 State; /* 0x00 */
U8 Reserved1; /* 0x01 */
U16 Reserved2; /* 0x02 */
U32 GenericAttributes; /* 0x04 */
U32 OEMSpecificAttributes; /* 0x08 */
U32 Reserved3; /* 0x0C */
U32 Reserved4; /* 0x10 */
} MPI2_RAID_COMPATIBILITY_RESULT_STRUCT,
MPI2_POINTER PTR_MPI2_RAID_COMPATIBILITY_RESULT_STRUCT,
Mpi2RaidCompatibilityResultStruct_t,
MPI2_POINTER pMpi2RaidCompatibilityResultStruct_t;
/* defines for RAID Compatibility Result Structure State field */
#define MPI2_RAID_COMPAT_STATE_COMPATIBLE (0x00)
#define MPI2_RAID_COMPAT_STATE_NOT_COMPATIBLE (0x01)
/* defines for RAID Compatibility Result Structure GenericAttributes field */
#define MPI2_RAID_COMPAT_GENATTRIB_4K_SECTOR (0x00000010)
#define MPI2_RAID_COMPAT_GENATTRIB_MEDIA_MASK (0x0000000C)
#define MPI2_RAID_COMPAT_GENATTRIB_SOLID_STATE_DRIVE (0x00000008)
#define MPI2_RAID_COMPAT_GENATTRIB_HARD_DISK_DRIVE (0x00000004)
#define MPI2_RAID_COMPAT_GENATTRIB_PROTOCOL_MASK (0x00000003)
#define MPI2_RAID_COMPAT_GENATTRIB_SAS_PROTOCOL (0x00000002)
#define MPI2_RAID_COMPAT_GENATTRIB_SATA_PROTOCOL (0x00000001)
/* RAID Action Reply ActionData union */
typedef union _MPI2_RAID_ACTION_REPLY_DATA
{
U32 Word[6];
MPI2_RAID_VOL_INDICATOR RaidVolumeIndicator;
U16 VolDevHandle;
U8 VolumeState;
U8 PhysDiskNum;
MPI2_RAID_COMPATIBILITY_RESULT_STRUCT RaidCompatibilityResult;
} MPI2_RAID_ACTION_REPLY_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_REPLY_DATA,
Mpi2RaidActionReplyData_t, MPI2_POINTER pMpi2RaidActionReplyData_t;
/* use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */
/* RAID Action Reply Message */
typedef struct _MPI2_RAID_ACTION_REPLY
{
U8 Action; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 VolDevHandle; /* 0x04 */
U8 PhysDiskNum; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved2; /* 0x0A */
U16 Reserved3; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
MPI2_RAID_ACTION_REPLY_DATA ActionData; /* 0x14 */
} MPI2_RAID_ACTION_REPLY, MPI2_POINTER PTR_MPI2_RAID_ACTION_REPLY,
Mpi2RaidActionReply_t, MPI2_POINTER pMpi2RaidActionReply_t;
#endif

View File

@ -1,288 +0,0 @@
/*
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_sas.h
* Title: MPI Serial Attached SCSI structures and definitions
* Creation Date: February 9, 2007
*
* mpi2_sas.h Version: 02.00.05
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 06-26-07 02.00.01 Added Clear All Persistent Operation to SAS IO Unit
* Control Request.
* 10-02-08 02.00.02 Added Set IOC Parameter Operation to SAS IO Unit Control
* Request.
* 10-28-09 02.00.03 Changed the type of SGL in MPI2_SATA_PASSTHROUGH_REQUEST
* to MPI2_SGE_IO_UNION since it supports chained SGLs.
* 05-12-10 02.00.04 Modified some comments.
* 08-11-10 02.00.05 Added NCQ operations to SAS IO Unit Control.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_SAS_H
#define MPI2_SAS_H
/*
* Values for SASStatus.
*/
#define MPI2_SASSTATUS_SUCCESS (0x00)
#define MPI2_SASSTATUS_UNKNOWN_ERROR (0x01)
#define MPI2_SASSTATUS_INVALID_FRAME (0x02)
#define MPI2_SASSTATUS_UTC_BAD_DEST (0x03)
#define MPI2_SASSTATUS_UTC_BREAK_RECEIVED (0x04)
#define MPI2_SASSTATUS_UTC_CONNECT_RATE_NOT_SUPPORTED (0x05)
#define MPI2_SASSTATUS_UTC_PORT_LAYER_REQUEST (0x06)
#define MPI2_SASSTATUS_UTC_PROTOCOL_NOT_SUPPORTED (0x07)
#define MPI2_SASSTATUS_UTC_STP_RESOURCES_BUSY (0x08)
#define MPI2_SASSTATUS_UTC_WRONG_DESTINATION (0x09)
#define MPI2_SASSTATUS_SHORT_INFORMATION_UNIT (0x0A)
#define MPI2_SASSTATUS_LONG_INFORMATION_UNIT (0x0B)
#define MPI2_SASSTATUS_XFER_RDY_INCORRECT_WRITE_DATA (0x0C)
#define MPI2_SASSTATUS_XFER_RDY_REQUEST_OFFSET_ERROR (0x0D)
#define MPI2_SASSTATUS_XFER_RDY_NOT_EXPECTED (0x0E)
#define MPI2_SASSTATUS_DATA_INCORRECT_DATA_LENGTH (0x0F)
#define MPI2_SASSTATUS_DATA_TOO_MUCH_READ_DATA (0x10)
#define MPI2_SASSTATUS_DATA_OFFSET_ERROR (0x11)
#define MPI2_SASSTATUS_SDSF_NAK_RECEIVED (0x12)
#define MPI2_SASSTATUS_SDSF_CONNECTION_FAILED (0x13)
#define MPI2_SASSTATUS_INITIATOR_RESPONSE_TIMEOUT (0x14)
/*
* Values for the SAS DeviceInfo field used in SAS Device Status Change Event
* data and SAS Configuration pages.
*/
#define MPI2_SAS_DEVICE_INFO_SEP (0x00004000)
#define MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE (0x00002000)
#define MPI2_SAS_DEVICE_INFO_LSI_DEVICE (0x00001000)
#define MPI2_SAS_DEVICE_INFO_DIRECT_ATTACH (0x00000800)
#define MPI2_SAS_DEVICE_INFO_SSP_TARGET (0x00000400)
#define MPI2_SAS_DEVICE_INFO_STP_TARGET (0x00000200)
#define MPI2_SAS_DEVICE_INFO_SMP_TARGET (0x00000100)
#define MPI2_SAS_DEVICE_INFO_SATA_DEVICE (0x00000080)
#define MPI2_SAS_DEVICE_INFO_SSP_INITIATOR (0x00000040)
#define MPI2_SAS_DEVICE_INFO_STP_INITIATOR (0x00000020)
#define MPI2_SAS_DEVICE_INFO_SMP_INITIATOR (0x00000010)
#define MPI2_SAS_DEVICE_INFO_SATA_HOST (0x00000008)
#define MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE (0x00000007)
#define MPI2_SAS_DEVICE_INFO_NO_DEVICE (0x00000000)
#define MPI2_SAS_DEVICE_INFO_END_DEVICE (0x00000001)
#define MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER (0x00000002)
#define MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER (0x00000003)
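/* Illustrative sketch (not part of the original header): classifying a device
 * from a CPU-endian copy of the DeviceInfo word with the mask defined above.
 * Convert the field with le32_to_cpu() first when reading it from a reply.
 */
static inline int mpi2_sas_is_end_device(u32 device_info)
{
	return (device_info & MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE) ==
	       MPI2_SAS_DEVICE_INFO_END_DEVICE;
}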
/*****************************************************************************
*
* SAS Messages
*
*****************************************************************************/
/****************************************************************************
* SMP Passthrough messages
****************************************************************************/
/* SMP Passthrough Request Message */
typedef struct _MPI2_SMP_PASSTHROUGH_REQUEST
{
U8 PassthroughFlags; /* 0x00 */
U8 PhysicalPort; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 RequestDataLength; /* 0x04 */
U8 SGLFlags; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved1; /* 0x0A */
U32 Reserved2; /* 0x0C */
U64 SASAddress; /* 0x10 */
U32 Reserved3; /* 0x18 */
U32 Reserved4; /* 0x1C */
MPI2_SIMPLE_SGE_UNION SGL; /* 0x20 */
} MPI2_SMP_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SMP_PASSTHROUGH_REQUEST,
Mpi2SmpPassthroughRequest_t, MPI2_POINTER pMpi2SmpPassthroughRequest_t;
/* values for PassthroughFlags field */
#define MPI2_SMP_PT_REQ_PT_FLAGS_IMMEDIATE (0x80)
/* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/* SMP Passthrough Reply Message */
typedef struct _MPI2_SMP_PASSTHROUGH_REPLY
{
U8 PassthroughFlags; /* 0x00 */
U8 PhysicalPort; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 ResponseDataLength; /* 0x04 */
U8 SGLFlags; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved1; /* 0x0A */
U8 Reserved2; /* 0x0C */
U8 SASStatus; /* 0x0D */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U32 Reserved3; /* 0x14 */
U8 ResponseData[4]; /* 0x18 */
} MPI2_SMP_PASSTHROUGH_REPLY, MPI2_POINTER PTR_MPI2_SMP_PASSTHROUGH_REPLY,
Mpi2SmpPassthroughReply_t, MPI2_POINTER pMpi2SmpPassthroughReply_t;
/* values for PassthroughFlags field */
#define MPI2_SMP_PT_REPLY_PT_FLAGS_IMMEDIATE (0x80)
/* values for SASStatus field are at the top of this file */
/****************************************************************************
* SATA Passthrough messages
****************************************************************************/
/* SATA Passthrough Request Message */
typedef struct _MPI2_SATA_PASSTHROUGH_REQUEST
{
U16 DevHandle; /* 0x00 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 PassthroughFlags; /* 0x04 */
U8 SGLFlags; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved1; /* 0x0A */
U32 Reserved2; /* 0x0C */
U32 Reserved3; /* 0x10 */
U32 Reserved4; /* 0x14 */
U32 DataLength; /* 0x18 */
U8 CommandFIS[20]; /* 0x1C */
MPI2_SGE_IO_UNION SGL; /* 0x30 */
} MPI2_SATA_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REQUEST,
Mpi2SataPassthroughRequest_t, MPI2_POINTER pMpi2SataPassthroughRequest_t;
/* values for PassthroughFlags field */
#define MPI2_SATA_PT_REQ_PT_FLAGS_EXECUTE_DIAG (0x0100)
#define MPI2_SATA_PT_REQ_PT_FLAGS_DMA (0x0020)
#define MPI2_SATA_PT_REQ_PT_FLAGS_PIO (0x0010)
#define MPI2_SATA_PT_REQ_PT_FLAGS_UNSPECIFIED_VU (0x0004)
#define MPI2_SATA_PT_REQ_PT_FLAGS_WRITE (0x0002)
#define MPI2_SATA_PT_REQ_PT_FLAGS_READ (0x0001)
/* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/* SATA Passthrough Reply Message */
typedef struct _MPI2_SATA_PASSTHROUGH_REPLY
{
U16 DevHandle; /* 0x00 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 PassthroughFlags; /* 0x04 */
U8 SGLFlags; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved1; /* 0x0A */
U8 Reserved2; /* 0x0C */
U8 SASStatus; /* 0x0D */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U8 StatusFIS[20]; /* 0x14 */
U32 StatusControlRegisters; /* 0x28 */
U32 TransferCount; /* 0x2C */
} MPI2_SATA_PASSTHROUGH_REPLY, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REPLY,
Mpi2SataPassthroughReply_t, MPI2_POINTER pMpi2SataPassthroughReply_t;
/* values for SASStatus field are at the top of this file */
/****************************************************************************
* SAS IO Unit Control messages
****************************************************************************/
/* SAS IO Unit Control Request Message */
typedef struct _MPI2_SAS_IOUNIT_CONTROL_REQUEST
{
U8 Operation; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 DevHandle; /* 0x04 */
U8 IOCParameter; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved3; /* 0x0A */
U16 Reserved4; /* 0x0C */
U8 PhyNum; /* 0x0E */
U8 PrimFlags; /* 0x0F */
U32 Primitive; /* 0x10 */
U8 LookupMethod; /* 0x14 */
U8 Reserved5; /* 0x15 */
U16 SlotNumber; /* 0x16 */
U64 LookupAddress; /* 0x18 */
U32 IOCParameterValue; /* 0x20 */
U32 Reserved7; /* 0x24 */
U32 Reserved8; /* 0x28 */
} MPI2_SAS_IOUNIT_CONTROL_REQUEST,
MPI2_POINTER PTR_MPI2_SAS_IOUNIT_CONTROL_REQUEST,
Mpi2SasIoUnitControlRequest_t, MPI2_POINTER pMpi2SasIoUnitControlRequest_t;
/* values for the Operation field */
#define MPI2_SAS_OP_CLEAR_ALL_PERSISTENT (0x02)
#define MPI2_SAS_OP_PHY_LINK_RESET (0x06)
#define MPI2_SAS_OP_PHY_HARD_RESET (0x07)
#define MPI2_SAS_OP_PHY_CLEAR_ERROR_LOG (0x08)
#define MPI2_SAS_OP_SEND_PRIMITIVE (0x0A)
#define MPI2_SAS_OP_FORCE_FULL_DISCOVERY (0x0B)
#define MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C)
#define MPI2_SAS_OP_REMOVE_DEVICE (0x0D)
#define MPI2_SAS_OP_LOOKUP_MAPPING (0x0E)
#define MPI2_SAS_OP_SET_IOC_PARAMETER (0x0F)
#define MPI2_SAS_OP_DEV_ENABLE_NCQ (0x14)
#define MPI2_SAS_OP_DEV_DISABLE_NCQ (0x15)
#define MPI2_SAS_OP_PRODUCT_SPECIFIC_MIN (0x80)
/* values for the PrimFlags field */
#define MPI2_SAS_PRIMFLAGS_SINGLE (0x08)
#define MPI2_SAS_PRIMFLAGS_TRIPLE (0x02)
#define MPI2_SAS_PRIMFLAGS_REDUNDANT (0x01)
/* values for the LookupMethod field */
#define MPI2_SAS_LOOKUP_METHOD_SAS_ADDRESS (0x01)
#define MPI2_SAS_LOOKUP_METHOD_SAS_ENCLOSURE_SLOT (0x02)
#define MPI2_SAS_LOOKUP_METHOD_SAS_DEVICE_NAME (0x03)
/* SAS IO Unit Control Reply Message */
typedef struct _MPI2_SAS_IOUNIT_CONTROL_REPLY
{
U8 Operation; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 DevHandle; /* 0x04 */
U8 IOCParameter; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved3; /* 0x0A */
U16 Reserved4; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
} MPI2_SAS_IOUNIT_CONTROL_REPLY,
MPI2_POINTER PTR_MPI2_SAS_IOUNIT_CONTROL_REPLY,
Mpi2SasIoUnitControlReply_t, MPI2_POINTER pMpi2SasIoUnitControlReply_t;
#endif

View File

@ -1,481 +0,0 @@
/*
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_tool.h
* Title: MPI diagnostic tool structures and definitions
* Creation Date: March 26, 2007
*
* mpi2_tool.h Version: 02.00.12
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* 12-18-07 02.00.01 Added Diagnostic Buffer Post and Diagnostic Release
* structures and defines.
* 02-29-08 02.00.02 Modified various names to make them 32-character unique.
* 05-06-09 02.00.03 Added ISTWI Read Write Tool and Diagnostic CLI Tool.
* 07-30-09 02.00.04 Added ExtendedType field to DiagnosticBufferPost request
* and reply messages.
* Added MPI2_DIAG_BUF_TYPE_EXTENDED.
* Incremented MPI2_DIAG_BUF_TYPE_COUNT.
* 05-12-10 02.00.05 Added Diagnostic Data Upload tool.
* 08-11-10 02.00.06 Added defines that were missing for Diagnostic Buffer
* Post Request.
* 05-25-11 02.00.07 Added Flags field and related defines to
* MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST.
* 07-26-12 02.00.10 Modified MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST so that
* it uses MPI Chain SGE as well as MPI Simple SGE.
* 08-19-13 02.00.11 Added MPI2_TOOLBOX_TEXT_DISPLAY_TOOL and related info.
* 01-08-14 02.00.12 Added MPI2_TOOLBOX_CLEAN_BIT26_PRODUCT_SPECIFIC.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_TOOL_H
#define MPI2_TOOL_H
/*****************************************************************************
*
* Toolbox Messages
*
*****************************************************************************/
/* defines for the Tools */
#define MPI2_TOOLBOX_CLEAN_TOOL (0x00)
#define MPI2_TOOLBOX_MEMORY_MOVE_TOOL (0x01)
#define MPI2_TOOLBOX_DIAG_DATA_UPLOAD_TOOL (0x02)
#define MPI2_TOOLBOX_ISTWI_READ_WRITE_TOOL (0x03)
#define MPI2_TOOLBOX_BEACON_TOOL (0x05)
#define MPI2_TOOLBOX_DIAGNOSTIC_CLI_TOOL (0x06)
#define MPI2_TOOLBOX_TEXT_DISPLAY_TOOL (0x07)
/****************************************************************************
* Toolbox reply
****************************************************************************/
typedef struct _MPI2_TOOLBOX_REPLY
{
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U16 Reserved5; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
} MPI2_TOOLBOX_REPLY, MPI2_POINTER PTR_MPI2_TOOLBOX_REPLY,
Mpi2ToolboxReply_t, MPI2_POINTER pMpi2ToolboxReply_t;
/****************************************************************************
* Toolbox Clean Tool request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_CLEAN_REQUEST
{
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U32 Flags; /* 0x0C */
} MPI2_TOOLBOX_CLEAN_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_CLEAN_REQUEST,
Mpi2ToolboxCleanRequest_t, MPI2_POINTER pMpi2ToolboxCleanRequest_t;
/* values for the Flags field */
#define MPI2_TOOLBOX_CLEAN_BOOT_SERVICES (0x80000000)
#define MPI2_TOOLBOX_CLEAN_PERSIST_MANUFACT_PAGES (0x40000000)
#define MPI2_TOOLBOX_CLEAN_OTHER_PERSIST_PAGES (0x20000000)
#define MPI2_TOOLBOX_CLEAN_FW_CURRENT (0x10000000)
#define MPI2_TOOLBOX_CLEAN_FW_BACKUP (0x08000000)
#define MPI2_TOOLBOX_CLEAN_BIT26_PRODUCT_SPECIFIC (0x04000000)
#define MPI2_TOOLBOX_CLEAN_MEGARAID (0x02000000)
#define MPI2_TOOLBOX_CLEAN_INITIALIZATION (0x01000000)
#define MPI2_TOOLBOX_CLEAN_FLASH (0x00000004)
#define MPI2_TOOLBOX_CLEAN_SEEPROM (0x00000002)
#define MPI2_TOOLBOX_CLEAN_NVSRAM (0x00000001)
/****************************************************************************
* Toolbox Memory Move request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_MEM_MOVE_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
MPI2_SGE_SIMPLE_UNION SGL; /* 0x0C */
} MPI2_TOOLBOX_MEM_MOVE_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_MEM_MOVE_REQUEST,
Mpi2ToolboxMemMoveRequest_t, MPI2_POINTER pMpi2ToolboxMemMoveRequest_t;
/****************************************************************************
* Toolbox Diagnostic Data Upload request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U8 SGLFlags; /* 0x0C */
U8 Reserved5; /* 0x0D */
U16 Reserved6; /* 0x0E */
U32 Flags; /* 0x10 */
U32 DataLength; /* 0x14 */
MPI2_SGE_SIMPLE_UNION SGL; /* 0x18 */
} MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST,
MPI2_POINTER PTR_MPI2_TOOLBOX_DIAG_DATA_UPLOAD_REQUEST,
Mpi2ToolboxDiagDataUploadRequest_t,
MPI2_POINTER pMpi2ToolboxDiagDataUploadRequest_t;
/* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
typedef struct _MPI2_DIAG_DATA_UPLOAD_HEADER {
U32 DiagDataLength; /* 00h */
U8 FormatCode; /* 04h */
U8 Reserved1; /* 05h */
U16 Reserved2; /* 06h */
} MPI2_DIAG_DATA_UPLOAD_HEADER, MPI2_POINTER PTR_MPI2_DIAG_DATA_UPLOAD_HEADER,
Mpi2DiagDataUploadHeader_t, MPI2_POINTER pMpi2DiagDataUploadHeader_t;
/****************************************************************************
* Toolbox ISTWI Read Write Tool
****************************************************************************/
/* Toolbox ISTWI Read Write Tool request message */
typedef struct _MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U32 Reserved5; /* 0x0C */
U32 Reserved6; /* 0x10 */
U8 DevIndex; /* 0x14 */
U8 Action; /* 0x15 */
U8 SGLFlags; /* 0x16 */
U8 Flags; /* 0x17 */
U16 TxDataLength; /* 0x18 */
U16 RxDataLength; /* 0x1A */
U32 Reserved8; /* 0x1C */
U32 Reserved9; /* 0x20 */
U32 Reserved10; /* 0x24 */
U32 Reserved11; /* 0x28 */
U32 Reserved12; /* 0x2C */
MPI2_SGE_SIMPLE_UNION SGL; /* 0x30 */
} MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST,
MPI2_POINTER PTR_MPI2_TOOLBOX_ISTWI_READ_WRITE_REQUEST,
Mpi2ToolboxIstwiReadWriteRequest_t,
MPI2_POINTER pMpi2ToolboxIstwiReadWriteRequest_t;
/* values for the Action field */
#define MPI2_TOOL_ISTWI_ACTION_READ_DATA (0x01)
#define MPI2_TOOL_ISTWI_ACTION_WRITE_DATA (0x02)
#define MPI2_TOOL_ISTWI_ACTION_SEQUENCE (0x03)
#define MPI2_TOOL_ISTWI_ACTION_RESERVE_BUS (0x10)
#define MPI2_TOOL_ISTWI_ACTION_RELEASE_BUS (0x11)
#define MPI2_TOOL_ISTWI_ACTION_RESET (0x12)
/* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/* values for the Flags field */
#define MPI2_TOOL_ISTWI_FLAG_AUTO_RESERVE_RELEASE (0x80)
#define MPI2_TOOL_ISTWI_FLAG_PAGE_ADDR_MASK (0x07)
/* Toolbox ISTWI Read Write Tool reply message */
typedef struct _MPI2_TOOLBOX_ISTWI_REPLY {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U16 Reserved5; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U8 DevIndex; /* 0x14 */
U8 Action; /* 0x15 */
U8 IstwiStatus; /* 0x16 */
U8 Reserved6; /* 0x17 */
U16 TxDataCount; /* 0x18 */
U16 RxDataCount; /* 0x1A */
} MPI2_TOOLBOX_ISTWI_REPLY, MPI2_POINTER PTR_MPI2_TOOLBOX_ISTWI_REPLY,
Mpi2ToolboxIstwiReply_t, MPI2_POINTER pMpi2ToolboxIstwiReply_t;
/****************************************************************************
* Toolbox Beacon Tool request
****************************************************************************/
typedef struct _MPI2_TOOLBOX_BEACON_REQUEST
{
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U8 Reserved5; /* 0x0C */
U8 PhysicalPort; /* 0x0D */
U8 Reserved6; /* 0x0E */
U8 Flags; /* 0x0F */
} MPI2_TOOLBOX_BEACON_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_BEACON_REQUEST,
Mpi2ToolboxBeaconRequest_t, MPI2_POINTER pMpi2ToolboxBeaconRequest_t;
/* values for the Flags field */
#define MPI2_TOOLBOX_FLAGS_BEACONMODE_OFF (0x00)
#define MPI2_TOOLBOX_FLAGS_BEACONMODE_ON (0x01)
/****************************************************************************
* Toolbox Diagnostic CLI Tool
****************************************************************************/
#define MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH (0x5C)
/* MPI v2.0 Toolbox Diagnostic CLI Tool request message */
typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U8 SGLFlags; /* 0x0C */
U8 Reserved5; /* 0x0D */
U16 Reserved6; /* 0x0E */
U32 DataLength; /* 0x10 */
U8 DiagnosticCliCommand[MPI2_TOOLBOX_DIAG_CLI_CMD_LENGTH]; /* 0x14 */
MPI2_MPI_SGE_IO_UNION SGL; /* 0x70 */
} MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST,
MPI2_POINTER PTR_MPI2_TOOLBOX_DIAGNOSTIC_CLI_REQUEST,
Mpi2ToolboxDiagnosticCliRequest_t,
MPI2_POINTER pMpi2ToolboxDiagnosticCliRequest_t;
/* use MPI2_SGLFLAGS_ defines from mpi2.h for the SGLFlags field */
/* Toolbox Diagnostic CLI Tool reply message */
typedef struct _MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U16 Reserved5; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U32 ReturnedDataLength; /* 0x14 */
} MPI2_TOOLBOX_DIAGNOSTIC_CLI_REPLY,
MPI2_POINTER PTR_MPI2_TOOLBOX_DIAG_CLI_REPLY,
Mpi2ToolboxDiagnosticCliReply_t,
MPI2_POINTER pMpi2ToolboxDiagnosticCliReply_t;
/****************************************************************************
* Toolbox Console Text Display Tool
****************************************************************************/
/* Toolbox Console Text Display Tool request message */
typedef struct _MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST {
U8 Tool; /* 0x00 */
U8 Reserved1; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U8 Console; /* 0x0C */
U8 Flags; /* 0x0D */
U16 Reserved6; /* 0x0E */
U8 TextToDisplay[4]; /* 0x10 */
} MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST,
MPI2_POINTER PTR_MPI2_TOOLBOX_TEXT_DISPLAY_REQUEST,
Mpi2ToolboxTextDisplayRequest_t,
MPI2_POINTER pMpi2ToolboxTextDisplayRequest_t;
/* defines for the Console field */
#define MPI2_TOOLBOX_CONSOLE_TYPE_MASK (0xF0)
#define MPI2_TOOLBOX_CONSOLE_TYPE_DEFAULT (0x00)
#define MPI2_TOOLBOX_CONSOLE_TYPE_UART (0x10)
#define MPI2_TOOLBOX_CONSOLE_TYPE_ETHERNET (0x20)
#define MPI2_TOOLBOX_CONSOLE_NUMBER_MASK (0x0F)
/* defines for the Flags field */
#define MPI2_TOOLBOX_CONSOLE_FLAG_TIMESTAMP (0x01)
/*****************************************************************************
*
* Diagnostic Buffer Messages
*
*****************************************************************************/
/****************************************************************************
* Diagnostic Buffer Post request
****************************************************************************/
typedef struct _MPI2_DIAG_BUFFER_POST_REQUEST
{
U8 ExtendedType; /* 0x00 */
U8 BufferType; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U64 BufferAddress; /* 0x0C */
U32 BufferLength; /* 0x14 */
U32 Reserved5; /* 0x18 */
U32 Reserved6; /* 0x1C */
U32 Flags; /* 0x20 */
U32 ProductSpecific[23]; /* 0x24 */
} MPI2_DIAG_BUFFER_POST_REQUEST, MPI2_POINTER PTR_MPI2_DIAG_BUFFER_POST_REQUEST,
Mpi2DiagBufferPostRequest_t, MPI2_POINTER pMpi2DiagBufferPostRequest_t;
/* values for the ExtendedType field */
#define MPI2_DIAG_EXTENDED_TYPE_UTILIZATION (0x02)
/* values for the BufferType field */
#define MPI2_DIAG_BUF_TYPE_TRACE (0x00)
#define MPI2_DIAG_BUF_TYPE_SNAPSHOT (0x01)
#define MPI2_DIAG_BUF_TYPE_EXTENDED (0x02)
/* count of the number of buffer types */
#define MPI2_DIAG_BUF_TYPE_COUNT (0x03)
/* values for the Flags field */
#define MPI2_DIAG_BUF_FLAG_RELEASE_ON_FULL (0x00000002)
#define MPI2_DIAG_BUF_FLAG_IMMEDIATE_RELEASE (0x00000001)
/****************************************************************************
* Diagnostic Buffer Post reply
****************************************************************************/
typedef struct _MPI2_DIAG_BUFFER_POST_REPLY
{
U8 ExtendedType; /* 0x00 */
U8 BufferType; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U16 Reserved5; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
U32 TransferLength; /* 0x14 */
} MPI2_DIAG_BUFFER_POST_REPLY, MPI2_POINTER PTR_MPI2_DIAG_BUFFER_POST_REPLY,
Mpi2DiagBufferPostReply_t, MPI2_POINTER pMpi2DiagBufferPostReply_t;
/****************************************************************************
* Diagnostic Release request
****************************************************************************/
typedef struct _MPI2_DIAG_RELEASE_REQUEST
{
U8 Reserved1; /* 0x00 */
U8 BufferType; /* 0x01 */
U8 ChainOffset; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
} MPI2_DIAG_RELEASE_REQUEST, MPI2_POINTER PTR_MPI2_DIAG_RELEASE_REQUEST,
Mpi2DiagReleaseRequest_t, MPI2_POINTER pMpi2DiagReleaseRequest_t;
/****************************************************************************
* Diagnostic Release reply
****************************************************************************/
typedef struct _MPI2_DIAG_RELEASE_REPLY
{
U8 Reserved1; /* 0x00 */
U8 BufferType; /* 0x01 */
U8 MsgLength; /* 0x02 */
U8 Function; /* 0x03 */
U16 Reserved2; /* 0x04 */
U8 Reserved3; /* 0x06 */
U8 MsgFlags; /* 0x07 */
U8 VP_ID; /* 0x08 */
U8 VF_ID; /* 0x09 */
U16 Reserved4; /* 0x0A */
U16 Reserved5; /* 0x0C */
U16 IOCStatus; /* 0x0E */
U32 IOCLogInfo; /* 0x10 */
} MPI2_DIAG_RELEASE_REPLY, MPI2_POINTER PTR_MPI2_DIAG_RELEASE_REPLY,
Mpi2DiagReleaseReply_t, MPI2_POINTER pMpi2DiagReleaseReply_t;
#endif

View File

@ -1,61 +0,0 @@
/*
* Copyright (c) 2000-2014 LSI Corporation.
*
*
* Name: mpi2_type.h
* Title: MPI basic type definitions
* Creation Date: August 16, 2006
*
* mpi2_type.h Version: 02.00.00
*
* Version History
* ---------------
*
* Date Version Description
* -------- -------- ------------------------------------------------------
* 04-30-07 02.00.00 Corresponds to Fusion-MPT MPI Specification Rev A.
* --------------------------------------------------------------------------
*/
#ifndef MPI2_TYPE_H
#define MPI2_TYPE_H
/*******************************************************************************
* Define MPI2_POINTER if it hasn't already been defined. By default
* MPI2_POINTER is defined to be a near pointer. MPI2_POINTER can be defined as
* a far pointer by defining MPI2_POINTER as "far *" before this header file is
* included.
*/
#ifndef MPI2_POINTER
#define MPI2_POINTER *
#endif
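/* Example (sketch, not part of the original header): a build that needs far
 * pointers defines MPI2_POINTER before this header is included, e.g.
 *
 *	#define MPI2_POINTER	far *
 *	#include "mpi2_type.h"
 *
 * otherwise the default near-pointer definition above is used.
 */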
/* the basic types may have already been included by mpi_type.h */
#ifndef MPI_TYPE_H
/*****************************************************************************
*
* Basic Types
*
*****************************************************************************/
typedef u8 U8;
typedef __le16 U16;
typedef __le32 U32;
typedef __le64 U64 __attribute__((aligned(4)));
/*****************************************************************************
*
* Pointer Types
*
*****************************************************************************/
typedef U8 *PU8;
typedef U16 *PU16;
typedef U32 *PU32;
typedef U64 *PU64;
#endif
#endif

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,419 +0,0 @@
/*
* Management Module Support for MPT (Message Passing Technology) based
* controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_ctl.h
* Copyright (C) 2007-2014 LSI Corporation
* Copyright (C) 2013-2014 Avago Technologies
* (mailto: MPT-FusionLinux.pdl@avagotech.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
#ifndef MPT2SAS_CTL_H_INCLUDED
#define MPT2SAS_CTL_H_INCLUDED
#ifdef __KERNEL__
#include <linux/miscdevice.h>
#endif
#define MPT2SAS_DEV_NAME "mpt2ctl"
#define MPT2_MAGIC_NUMBER 'L'
#define MPT2_IOCTL_DEFAULT_TIMEOUT (10) /* in seconds */
/**
* IOCTL opcodes
*/
#define MPT2IOCINFO _IOWR(MPT2_MAGIC_NUMBER, 17, \
struct mpt2_ioctl_iocinfo)
#define MPT2COMMAND _IOWR(MPT2_MAGIC_NUMBER, 20, \
struct mpt2_ioctl_command)
#ifdef CONFIG_COMPAT
#define MPT2COMMAND32 _IOWR(MPT2_MAGIC_NUMBER, 20, \
struct mpt2_ioctl_command32)
#endif
#define MPT2EVENTQUERY _IOWR(MPT2_MAGIC_NUMBER, 21, \
struct mpt2_ioctl_eventquery)
#define MPT2EVENTENABLE _IOWR(MPT2_MAGIC_NUMBER, 22, \
struct mpt2_ioctl_eventenable)
#define MPT2EVENTREPORT _IOWR(MPT2_MAGIC_NUMBER, 23, \
struct mpt2_ioctl_eventreport)
#define MPT2HARDRESET _IOWR(MPT2_MAGIC_NUMBER, 24, \
struct mpt2_ioctl_diag_reset)
#define MPT2BTDHMAPPING _IOWR(MPT2_MAGIC_NUMBER, 31, \
struct mpt2_ioctl_btdh_mapping)
/* diag buffer support */
#define MPT2DIAGREGISTER _IOWR(MPT2_MAGIC_NUMBER, 26, \
struct mpt2_diag_register)
#define MPT2DIAGRELEASE _IOWR(MPT2_MAGIC_NUMBER, 27, \
struct mpt2_diag_release)
#define MPT2DIAGUNREGISTER _IOWR(MPT2_MAGIC_NUMBER, 28, \
struct mpt2_diag_unregister)
#define MPT2DIAGQUERY _IOWR(MPT2_MAGIC_NUMBER, 29, \
struct mpt2_diag_query)
#define MPT2DIAGREADBUFFER _IOWR(MPT2_MAGIC_NUMBER, 30, \
struct mpt2_diag_read_buffer)
/**
* struct mpt2_ioctl_header - main header structure
* @ioc_number - IOC unit number
* @port_number - IOC port number
* @max_data_size - maximum number of bytes to transfer on read
*/
struct mpt2_ioctl_header {
uint32_t ioc_number;
uint32_t port_number;
uint32_t max_data_size;
};
/**
* struct mpt2_ioctl_diag_reset - diagnostic reset
* @hdr - generic header
*/
struct mpt2_ioctl_diag_reset {
struct mpt2_ioctl_header hdr;
};
/**
* struct mpt2_ioctl_pci_info - pci device info
* @device - pci device id
* @function - pci function id
* @bus - pci bus id
* @segment_id - pci segment id
*/
struct mpt2_ioctl_pci_info {
union {
struct {
uint32_t device:5;
uint32_t function:3;
uint32_t bus:24;
} bits;
uint32_t word;
} u;
uint32_t segment_id;
};
#define MPT2_IOCTL_INTERFACE_SCSI (0x00)
#define MPT2_IOCTL_INTERFACE_FC (0x01)
#define MPT2_IOCTL_INTERFACE_FC_IP (0x02)
#define MPT2_IOCTL_INTERFACE_SAS (0x03)
#define MPT2_IOCTL_INTERFACE_SAS2 (0x04)
#define MPT2_IOCTL_INTERFACE_SAS2_SSS6200 (0x05)
#define MPT2_IOCTL_VERSION_LENGTH (32)
/**
* struct mpt2_ioctl_iocinfo - generic controller info
* @hdr - generic header
* @adapter_type - type of adapter (spi, fc, sas)
* @port_number - port number
* @pci_id - PCI Id
* @hw_rev - hardware revision
* @subsystem_device - PCI subsystem Device ID
* @subsystem_vendor - PCI subsystem Vendor ID
* @rsvd0 - reserved
* @firmware_version - firmware version
* @bios_version - BIOS version
* @driver_version - driver version - 32 ASCII characters
* @rsvd1 - reserved
* @scsi_id - scsi id of adapter 0
* @rsvd2 - reserved
* @pci_information - pci info (2nd revision)
*/
struct mpt2_ioctl_iocinfo {
struct mpt2_ioctl_header hdr;
uint32_t adapter_type;
uint32_t port_number;
uint32_t pci_id;
uint32_t hw_rev;
uint32_t subsystem_device;
uint32_t subsystem_vendor;
uint32_t rsvd0;
uint32_t firmware_version;
uint32_t bios_version;
uint8_t driver_version[MPT2_IOCTL_VERSION_LENGTH];
uint8_t rsvd1;
uint8_t scsi_id;
uint16_t rsvd2;
struct mpt2_ioctl_pci_info pci_information;
};
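/* Hedged user-space sketch (not part of the original header): querying
 * controller 0 through the MPT2IOCINFO ioctl.  The /dev/mpt2ctl node name
 * follows MPT2SAS_DEV_NAME above; open()/ioctl()/printf() come from the usual
 * libc headers and error handling is trimmed.
 *
 *	struct mpt2_ioctl_iocinfo info;
 *	int fd = open("/dev/mpt2ctl", O_RDWR);
 *
 *	memset(&info, 0, sizeof(info));
 *	info.hdr.ioc_number = 0;
 *	info.hdr.port_number = 0;
 *	info.hdr.max_data_size = sizeof(info);
 *	if (ioctl(fd, MPT2IOCINFO, &info) == 0)
 *		printf("firmware version 0x%08x\n", info.firmware_version);
 */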
/* number of event log entries */
#define MPT2SAS_CTL_EVENT_LOG_SIZE (50)
/**
* struct mpt2_ioctl_eventquery - query event count and type
* @hdr - generic header
* @event_entries - number of events returned by get_event_report
* @rsvd - reserved
* @event_types - type of events currently being captured
*/
struct mpt2_ioctl_eventquery {
struct mpt2_ioctl_header hdr;
uint16_t event_entries;
uint16_t rsvd;
uint32_t event_types[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];
};
/**
* struct mpt2_ioctl_eventenable - enable/disable event capturing
* @hdr - generic header
* @event_types - toggle off/on type of events to be captured
*/
struct mpt2_ioctl_eventenable {
struct mpt2_ioctl_header hdr;
uint32_t event_types[4];
};
#define MPT2_EVENT_DATA_SIZE (192)
/**
* struct MPT2_IOCTL_EVENTS -
* @event - the event that was reported
* @context - unique value for each event assigned by driver
* @data - event data returned in fw reply message
*/
struct MPT2_IOCTL_EVENTS {
uint32_t event;
uint32_t context;
uint8_t data[MPT2_EVENT_DATA_SIZE];
};
/**
* struct mpt2_ioctl_eventreport - returning event log
* @hdr - generic header
* @event_data - (see struct MPT2_IOCTL_EVENTS)
*/
struct mpt2_ioctl_eventreport {
struct mpt2_ioctl_header hdr;
struct MPT2_IOCTL_EVENTS event_data[1];
};
/**
* struct mpt2_ioctl_command - generic mpt firmware passthru ioctl
* @hdr - generic header
* @timeout - command timeout in seconds. (if zero then use driver default
* value).
* @reply_frame_buf_ptr - reply location
* @data_in_buf_ptr - destination for read
* @data_out_buf_ptr - data source for write
* @sense_data_ptr - sense data location
* @max_reply_bytes - maximum number of reply bytes to be sent to app.
* @data_in_size - number of bytes for data transfer in (read)
* @data_out_size - number of bytes for data transfer out (write)
* @max_sense_bytes - maximum number of bytes for auto sense buffers
* @data_sge_offset - offset in words from the start of the request message to
* the first SGL
* @mf[1];
*/
struct mpt2_ioctl_command {
struct mpt2_ioctl_header hdr;
uint32_t timeout;
void __user *reply_frame_buf_ptr;
void __user *data_in_buf_ptr;
void __user *data_out_buf_ptr;
void __user *sense_data_ptr;
uint32_t max_reply_bytes;
uint32_t data_in_size;
uint32_t data_out_size;
uint32_t max_sense_bytes;
uint32_t data_sge_offset;
uint8_t mf[1];
};
#ifdef CONFIG_COMPAT
struct mpt2_ioctl_command32 {
struct mpt2_ioctl_header hdr;
uint32_t timeout;
uint32_t reply_frame_buf_ptr;
uint32_t data_in_buf_ptr;
uint32_t data_out_buf_ptr;
uint32_t sense_data_ptr;
uint32_t max_reply_bytes;
uint32_t data_in_size;
uint32_t data_out_size;
uint32_t max_sense_bytes;
uint32_t data_sge_offset;
uint8_t mf[1];
};
#endif
/**
* struct mpt2_ioctl_btdh_mapping - mapping info
* @hdr - generic header
* @id - target device identification number
* @bus - SCSI bus number that the target device exists on
* @handle - device handle for the target device
* @rsvd - reserved
*
* To obtain a bus/id, the application sets
* handle to a valid handle, and bus/id to 0xFFFF.
*
* To obtain the device handle, the application sets
* bus/id to a valid value, and the handle to 0xFFFF.
*/
struct mpt2_ioctl_btdh_mapping {
struct mpt2_ioctl_header hdr;
uint32_t id;
uint32_t bus;
uint16_t handle;
uint16_t rsvd;
};
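/* Sketch (illustrative, not part of the original header): asking the driver
 * for the bus/id that corresponds to a known firmware device handle, as
 * described above -- fill in the handle and put 0xFFFF in bus and id.  fd is
 * assumed to be open on the mpt2ctl node and known_handle is hypothetical.
 *
 *	struct mpt2_ioctl_btdh_mapping map;
 *
 *	memset(&map, 0, sizeof(map));
 *	map.hdr.ioc_number = 0;
 *	map.hdr.max_data_size = sizeof(map);
 *	map.handle = known_handle;
 *	map.bus = 0xFFFF;
 *	map.id = 0xFFFF;
 *	if (ioctl(fd, MPT2BTDHMAPPING, &map) == 0)
 *		printf("handle 0x%04x is bus %u target %u\n",
 *		    known_handle, map.bus, map.id);
 */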
/* status bits for ioc->diag_buffer_status */
#define MPT2_DIAG_BUFFER_IS_REGISTERED (0x01)
#define MPT2_DIAG_BUFFER_IS_RELEASED (0x02)
#define MPT2_DIAG_BUFFER_IS_DIAG_RESET (0x04)
/* application flags for mpt2_diag_register, mpt2_diag_query */
#define MPT2_APP_FLAGS_APP_OWNED (0x0001)
#define MPT2_APP_FLAGS_BUFFER_VALID (0x0002)
#define MPT2_APP_FLAGS_FW_BUFFER_ACCESS (0x0004)
/* flags for mpt2_diag_read_buffer */
#define MPT2_FLAGS_REREGISTER (0x0001)
#define MPT2_PRODUCT_SPECIFIC_DWORDS 23
/**
* struct mpt2_diag_register - application register with driver
* @hdr - generic header
* @reserved -
* @buffer_type - specifies either TRACE, SNAPSHOT, or EXTENDED
* @application_flags - misc flags
* @diagnostic_flags - specifies flags affecting command processing
* @product_specific - product specific information
* @requested_buffer_size - buffer size in bytes
* @unique_id - tag specified by application that is used to signal ownership
* of the buffer.
*
* This will allow the driver to set up any required buffers that will be
* needed by firmware to communicate with the driver.
*/
struct mpt2_diag_register {
struct mpt2_ioctl_header hdr;
uint8_t reserved;
uint8_t buffer_type;
uint16_t application_flags;
uint32_t diagnostic_flags;
uint32_t product_specific[MPT2_PRODUCT_SPECIFIC_DWORDS];
uint32_t requested_buffer_size;
uint32_t unique_id;
};
/**
* struct mpt2_diag_unregister - application unregister with driver
* @hdr - generic header
* @unique_id - tag uniquely identifies the buffer to be unregistered
*
* This will allow the driver to clean up any memory allocated for diag
* messages and to free up any resources.
*/
struct mpt2_diag_unregister {
struct mpt2_ioctl_header hdr;
uint32_t unique_id;
};
/**
* struct mpt2_diag_query - query relevant info associated with diag buffers
* @hdr - generic header
* @reserved -
* @buffer_type - specifies either TRACE, SNAPSHOT, or EXTENDED
* @application_flags - misc flags
* @diagnostic_flags - specifies flags affecting command processing
* @product_specific - product specific information
* @total_buffer_size - diag buffer size in bytes
* @driver_added_buffer_size - size of extra space appended to end of buffer
* @unique_id - unique id associated with this buffer.
*
* The application will send only buffer_type and unique_id. The driver will
* inspect unique_id first; if it is valid, it will fill in all the info. If
* unique_id is 0x00, the driver will return the info specified by buffer_type.
*/
struct mpt2_diag_query {
struct mpt2_ioctl_header hdr;
uint8_t reserved;
uint8_t buffer_type;
uint16_t application_flags;
uint32_t diagnostic_flags;
uint32_t product_specific[MPT2_PRODUCT_SPECIFIC_DWORDS];
uint32_t total_buffer_size;
uint32_t driver_added_buffer_size;
uint32_t unique_id;
};
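/* Sketch (illustrative, not part of the original header): querying the trace
 * buffer by type rather than by unique id -- per the comment above, a
 * unique_id of 0 tells the driver to look the buffer up by buffer_type.
 * MPI2_DIAG_BUF_TYPE_TRACE comes from mpi2_tool.h; fd is assumed open on the
 * mpt2ctl node.
 *
 *	struct mpt2_diag_query query;
 *
 *	memset(&query, 0, sizeof(query));
 *	query.hdr.ioc_number = 0;
 *	query.hdr.max_data_size = sizeof(query);
 *	query.buffer_type = MPI2_DIAG_BUF_TYPE_TRACE;
 *	query.unique_id = 0;
 *	if (ioctl(fd, MPT2DIAGQUERY, &query) == 0)
 *		printf("trace buffer: %u bytes, flags 0x%04x\n",
 *		    query.total_buffer_size, query.application_flags);
 */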
/**
* struct mpt2_diag_release - request to send Diag Release Message to firmware
* @hdr - generic header
* @unique_id - tag uniquely identifies the buffer to be released
*
* This allows ownership of the specified buffer to be returned to the driver,
* allowing an application to read the buffer without fear that firmware is
* overwriting information in the buffer.
*/
struct mpt2_diag_release {
struct mpt2_ioctl_header hdr;
uint32_t unique_id;
};
/**
* struct mpt2_diag_read_buffer - request for copy of the diag buffer
* @hdr - generic header
* @status -
* @reserved -
* @flags - misc flags
* @starting_offset - starting offset within the driver's buffer at which to
* start reading data into the specified application buffer
* @bytes_to_read - number of bytes to copy from the driver's buffer into the
* application buffer, starting at starting_offset.
* @unique_id - unique id associated with this buffer.
* @diagnostic_data - data payload
*/
struct mpt2_diag_read_buffer {
struct mpt2_ioctl_header hdr;
uint8_t status;
uint8_t reserved;
uint16_t flags;
uint32_t starting_offset;
uint32_t bytes_to_read;
uint32_t unique_id;
uint32_t diagnostic_data[1];
};
#endif /* MPT2SAS_CTL_H_INCLUDED */

View File

@ -1,182 +0,0 @@
/*
* Logging Support for MPT (Message Passing Technology) based controllers
*
* This code is based on drivers/scsi/mpt2sas/mpt2_debug.c
* Copyright (C) 2007-2014 LSI Corporation
* Copyright (C) 2013-2014 Avago Technologies
* (mailto: MPT-FusionLinux.pdl@avagotech.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301,
* USA.
*/
#ifndef MPT2SAS_DEBUG_H_INCLUDED
#define MPT2SAS_DEBUG_H_INCLUDED
#define MPT_DEBUG 0x00000001
#define MPT_DEBUG_MSG_FRAME 0x00000002
#define MPT_DEBUG_SG 0x00000004
#define MPT_DEBUG_EVENTS 0x00000008
#define MPT_DEBUG_EVENT_WORK_TASK 0x00000010
#define MPT_DEBUG_INIT 0x00000020
#define MPT_DEBUG_EXIT 0x00000040
#define MPT_DEBUG_FAIL 0x00000080
#define MPT_DEBUG_TM 0x00000100
#define MPT_DEBUG_REPLY 0x00000200
#define MPT_DEBUG_HANDSHAKE 0x00000400
#define MPT_DEBUG_CONFIG 0x00000800
#define MPT_DEBUG_DL 0x00001000
#define MPT_DEBUG_RESET 0x00002000
#define MPT_DEBUG_SCSI 0x00004000
#define MPT_DEBUG_IOCTL 0x00008000
#define MPT_DEBUG_CSMISAS 0x00010000
#define MPT_DEBUG_SAS 0x00020000
#define MPT_DEBUG_TRANSPORT 0x00040000
#define MPT_DEBUG_TASK_SET_FULL 0x00080000
#define MPT_DEBUG_TARGET_MODE 0x00100000
/*
* CONFIG_SCSI_MPT2SAS_LOGGING - enabled in Kconfig
*/
#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
#define MPT_CHECK_LOGGING(IOC, CMD, BITS) \
{ \
if (IOC->logging_level & BITS) \
CMD; \
}
#else
#define MPT_CHECK_LOGGING(IOC, CMD, BITS)
#endif /* CONFIG_SCSI_MPT2SAS_LOGGING */
/*
* debug macros
*/
#define dprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG)
#define dsgprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SG)
#define devtprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EVENTS)
#define dewtprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EVENT_WORK_TASK)
#define dinitprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_INIT)
#define dexitprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EXIT)
#define dfailprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_FAIL)
#define dtmprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TM)
#define dreplyprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_REPLY)
#define dhsprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_HANDSHAKE)
#define dcprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_CONFIG)
#define ddlprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_DL)
#define drsprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_RESET)
#define dsprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SCSI)
#define dctlprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_IOCTL)
#define dcsmisasprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_CSMISAS)
#define dsasprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SAS)
#define dsastransport(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SAS_WIDE)
#define dmfprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_MSG_FRAME)
#define dtsfprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TASK_SET_FULL)
#define dtransportprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TRANSPORT)
#define dTMprintk(IOC, CMD) \
MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TARGET_MODE)
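/* Illustrative call site (not part of the original header): the statement
 * passed as CMD only runs when CONFIG_SCSI_MPT2SAS_LOGGING is enabled and the
 * corresponding bit is set in ioc->logging_level, e.g. MPT_DEBUG_TM
 * (0x00000100) for dtmprintk:
 *
 *	dtmprintk(ioc, printk(KERN_INFO "%s: sending task management request\n",
 *	    ioc->name));
 */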
/* inline functions for dumping debug data*/
#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
/**
* _debug_dump_mf - print message frame contents
* @mpi_request: pointer to message frame
* @sz: number of dwords
*/
static inline void
_debug_dump_mf(void *mpi_request, int sz)
{
int i;
__le32 *mfp = (__le32 *)mpi_request;
printk(KERN_INFO "mf:\n\t");
for (i = 0; i < sz; i++) {
if (i && ((i % 8) == 0))
printk("\n\t");
printk("%08x ", le32_to_cpu(mfp[i]));
}
printk("\n");
}
#else
#define _debug_dump_mf(mpi_request, sz)
#endif /* CONFIG_SCSI_MPT2SAS_LOGGING */
#endif /* MPT2SAS_DEBUG_H_INCLUDED */

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -41,15 +41,15 @@
# USA.
config SCSI_MPT3SAS
tristate "LSI MPT Fusion SAS 3.0 Device Driver"
tristate "LSI MPT Fusion SAS 3.0 & SAS 2.0 Device Driver"
depends on PCI && SCSI
select SCSI_SAS_ATTRS
select RAID_ATTRS
---help---
This driver supports PCI-Express SAS 12Gb/s Host Adapters.
config SCSI_MPT3SAS_MAX_SGE
int "LSI MPT Fusion Max number of SG Entries (16 - 256)"
config SCSI_MPT2SAS_MAX_SGE
int "LSI MPT Fusion SAS 2.0 Max number of SG Entries (16 - 256)"
depends on PCI && SCSI && SCSI_MPT3SAS
default "128"
range 16 256
@ -60,8 +60,14 @@ config SCSI_MPT3SAS_MAX_SGE
can be 256. However, it may be decreased down to 16. Decreasing this
parameter will reduce memory requirements on a per controller instance.
config SCSI_MPT3SAS_LOGGING
bool "LSI MPT Fusion logging facility"
config SCSI_MPT3SAS_MAX_SGE
int "LSI MPT Fusion SAS 3.0 Max number of SG Entries (16 - 256)"
depends on PCI && SCSI && SCSI_MPT3SAS
default "128"
range 16 256
---help---
This turns on a logging facility.
This option allows you to specify the maximum number of scatter-
gather entries per I/O. The driver default is 128, which matches
MAX_PHYS_SEGMENTS in most kernels. However in SuSE kernels this
can be 256. However, it may be decreased down to 16. Decreasing this
parameter will reduce memory requirements on a per controller instance.

View File

@ -5,4 +5,5 @@ mpt3sas-y += mpt3sas_base.o \
mpt3sas_scsih.o \
mpt3sas_transport.o \
mpt3sas_ctl.o \
mpt3sas_trigger_diag.o
mpt3sas_trigger_diag.o \
mpt3sas_warpdrive.o

View File

@ -108,9 +108,12 @@ _scsih_set_fwfault_debug(const char *val, struct kernel_param *kp)
if (ret)
return ret;
/* global ioc spinlock to protect controller list on list operations */
pr_info("setting fwfault_debug(%d)\n", mpt3sas_fwfault_debug);
spin_lock(&gioc_lock);
list_for_each_entry(ioc, &mpt3sas_ioc_list, list)
ioc->fwfault_debug = mpt3sas_fwfault_debug;
spin_unlock(&gioc_lock);
return 0;
}
module_param_call(mpt3sas_fwfault_debug, _scsih_set_fwfault_debug,
@ -157,7 +160,7 @@ _base_fault_reset_work(struct work_struct *work)
spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
if (ioc->shost_recovery)
if (ioc->shost_recovery || ioc->pci_error_recovery)
goto rearm_timer;
spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
@ -166,6 +169,20 @@ _base_fault_reset_work(struct work_struct *work)
pr_err(MPT3SAS_FMT "SAS host is non-operational !!!!\n",
ioc->name);
/* It may be possible that EEH recovery can resolve some of the
* pci bus failure issues rather than removing the dead ioc function
* by treating the controller as non-operational. So priority is
* given to the EEH recovery here. If it does not resolve the issue,
* the mpt3sas driver will consider the controller to be in a
* non-operational state and remove the dead ioc function.
*/
if (ioc->non_operational_loop++ < 5) {
spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock,
flags);
goto rearm_timer;
}
/*
* Call _scsih_flush_pending_cmds callback so that we flush all
* pending commands back to OS. This call is required to avoid
@ -181,7 +198,7 @@ _base_fault_reset_work(struct work_struct *work)
ioc->remove_host = 1;
/*Remove the Dead Host */
p = kthread_run(mpt3sas_remove_dead_ioc_func, ioc,
"mpt3sas_dead_ioc_%d", ioc->id);
"%s_dead_ioc_%d", ioc->driver_name, ioc->id);
if (IS_ERR(p))
pr_err(MPT3SAS_FMT
"%s: Running mpt3sas_dead_ioc thread failed !!!!\n",
@ -193,6 +210,8 @@ _base_fault_reset_work(struct work_struct *work)
return; /* don't rearm timer */
}
ioc->non_operational_loop = 0;
if ((doorbell & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL) {
rc = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP,
FORCE_BIG_HAMMER);
@ -235,7 +254,8 @@ mpt3sas_base_start_watchdog(struct MPT3SAS_ADAPTER *ioc)
INIT_DELAYED_WORK(&ioc->fault_reset_work, _base_fault_reset_work);
snprintf(ioc->fault_reset_work_q_name,
sizeof(ioc->fault_reset_work_q_name), "poll_%d_status", ioc->id);
sizeof(ioc->fault_reset_work_q_name), "poll_%s%d_status",
ioc->driver_name, ioc->id);
ioc->fault_reset_work_q =
create_singlethread_workqueue(ioc->fault_reset_work_q_name);
if (!ioc->fault_reset_work_q) {
@ -324,7 +344,6 @@ mpt3sas_halt_firmware(struct MPT3SAS_ADAPTER *ioc)
panic("panic in %s\n", __func__);
}
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
/**
* _base_sas_ioc_info - verbose translation of the ioc status
* @ioc: per adapter object
@ -578,7 +597,8 @@ _base_display_event_data(struct MPT3SAS_ADAPTER *ioc,
desc = "Device Status Change";
break;
case MPI2_EVENT_IR_OPERATION_STATUS:
desc = "IR Operation Status";
if (!ioc->hide_ir_msg)
desc = "IR Operation Status";
break;
case MPI2_EVENT_SAS_DISCOVERY:
{
@ -609,16 +629,20 @@ _base_display_event_data(struct MPT3SAS_ADAPTER *ioc,
desc = "SAS Enclosure Device Status Change";
break;
case MPI2_EVENT_IR_VOLUME:
desc = "IR Volume";
if (!ioc->hide_ir_msg)
desc = "IR Volume";
break;
case MPI2_EVENT_IR_PHYSICAL_DISK:
desc = "IR Physical Disk";
if (!ioc->hide_ir_msg)
desc = "IR Physical Disk";
break;
case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
desc = "IR Configuration Change List";
if (!ioc->hide_ir_msg)
desc = "IR Configuration Change List";
break;
case MPI2_EVENT_LOG_ENTRY_ADDED:
desc = "Log Entry Added";
if (!ioc->hide_ir_msg)
desc = "Log Entry Added";
break;
case MPI2_EVENT_TEMP_THRESHOLD:
desc = "Temperature Threshold";
@ -630,7 +654,6 @@ _base_display_event_data(struct MPT3SAS_ADAPTER *ioc,
pr_info(MPT3SAS_FMT "%s\n", ioc->name, desc);
}
#endif
/**
* _base_sas_log_info - verbose translation of firmware log info
@ -675,7 +698,10 @@ _base_sas_log_info(struct MPT3SAS_ADAPTER *ioc , u32 log_info)
originator_str = "PL";
break;
case 2:
originator_str = "IR";
if (!ioc->hide_ir_msg)
originator_str = "IR";
else
originator_str = "WarpDrive";
break;
}
@ -710,13 +736,13 @@ _base_display_reply_info(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
return;
}
ioc_status = le16_to_cpu(mpi_reply->IOCStatus);
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
if ((ioc_status & MPI2_IOCSTATUS_MASK) &&
(ioc->logging_level & MPT_DEBUG_REPLY)) {
_base_sas_ioc_info(ioc , mpi_reply,
mpt3sas_base_get_msg_frame(ioc, smid));
}
#endif
if (ioc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) {
loginfo = le32_to_cpu(mpi_reply->IOCLogInfo);
_base_sas_log_info(ioc, loginfo);
@ -783,9 +809,9 @@ _base_async_event(struct MPT3SAS_ADAPTER *ioc, u8 msix_index, u32 reply)
return 1;
if (mpi_reply->Function != MPI2_FUNCTION_EVENT_NOTIFICATION)
return 1;
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
_base_display_event_data(ioc, mpi_reply);
#endif
if (!(mpi_reply->AckRequired & MPI2_EVENT_NOTIFICATION_ACK_REQUIRED))
goto out;
smid = mpt3sas_base_get_smid(ioc, ioc->base_cb_idx);
@ -1009,6 +1035,12 @@ _base_interrupt(int irq, void *bus_id)
}
wmb();
if (ioc->is_warpdrive) {
writel(reply_q->reply_post_host_index,
ioc->reply_post_host_index[msix_index]);
atomic_dec(&reply_q->busy);
return IRQ_HANDLED;
}
/* Update Reply Post Host Index.
* For those HBA's which support combined reply queue feature
@ -1319,6 +1351,149 @@ _base_build_zero_len_sge_ieee(struct MPT3SAS_ADAPTER *ioc, void *paddr)
_base_add_sg_single_ieee(paddr, sgl_flags, 0, 0, -1);
}
/**
* _base_build_sg_scmd - main sg creation routine
* @ioc: per adapter object
* @scmd: scsi command
* @smid: system request message index
* Context: none.
*
* The main routine that builds scatter gather table from a given
* scsi request sent via the .queuecommand main handler.
*
* Returns 0 success, anything else error
*/
static int
_base_build_sg_scmd(struct MPT3SAS_ADAPTER *ioc,
struct scsi_cmnd *scmd, u16 smid)
{
Mpi2SCSIIORequest_t *mpi_request;
dma_addr_t chain_dma;
struct scatterlist *sg_scmd;
void *sg_local, *chain;
u32 chain_offset;
u32 chain_length;
u32 chain_flags;
int sges_left;
u32 sges_in_segment;
u32 sgl_flags;
u32 sgl_flags_last_element;
u32 sgl_flags_end_buffer;
struct chain_tracker *chain_req;
mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
/* init scatter gather flags */
sgl_flags = MPI2_SGE_FLAGS_SIMPLE_ELEMENT;
if (scmd->sc_data_direction == DMA_TO_DEVICE)
sgl_flags |= MPI2_SGE_FLAGS_HOST_TO_IOC;
sgl_flags_last_element = (sgl_flags | MPI2_SGE_FLAGS_LAST_ELEMENT)
<< MPI2_SGE_FLAGS_SHIFT;
sgl_flags_end_buffer = (sgl_flags | MPI2_SGE_FLAGS_LAST_ELEMENT |
MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_END_OF_LIST)
<< MPI2_SGE_FLAGS_SHIFT;
sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
sg_scmd = scsi_sglist(scmd);
sges_left = scsi_dma_map(scmd);
if (sges_left < 0) {
sdev_printk(KERN_ERR, scmd->device,
"pci_map_sg failed: request for %d bytes!\n",
scsi_bufflen(scmd));
return -ENOMEM;
}
sg_local = &mpi_request->SGL;
sges_in_segment = ioc->max_sges_in_main_message;
if (sges_left <= sges_in_segment)
goto fill_in_last_segment;
mpi_request->ChainOffset = (offsetof(Mpi2SCSIIORequest_t, SGL) +
(sges_in_segment * ioc->sge_size))/4;
/* fill in main message segment when there is a chain following */
while (sges_in_segment) {
if (sges_in_segment == 1)
ioc->base_add_sg_single(sg_local,
sgl_flags_last_element | sg_dma_len(sg_scmd),
sg_dma_address(sg_scmd));
else
ioc->base_add_sg_single(sg_local, sgl_flags |
sg_dma_len(sg_scmd), sg_dma_address(sg_scmd));
sg_scmd = sg_next(sg_scmd);
sg_local += ioc->sge_size;
sges_left--;
sges_in_segment--;
}
/* initializing the chain flags and pointers */
chain_flags = MPI2_SGE_FLAGS_CHAIN_ELEMENT << MPI2_SGE_FLAGS_SHIFT;
chain_req = _base_get_chain_buffer_tracker(ioc, smid);
if (!chain_req)
return -1;
chain = chain_req->chain_buffer;
chain_dma = chain_req->chain_buffer_dma;
do {
sges_in_segment = (sges_left <=
ioc->max_sges_in_chain_message) ? sges_left :
ioc->max_sges_in_chain_message;
chain_offset = (sges_left == sges_in_segment) ?
0 : (sges_in_segment * ioc->sge_size)/4;
chain_length = sges_in_segment * ioc->sge_size;
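/* when another chain segment follows, reserve one extra SGE in this segment for the chain element */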
if (chain_offset) {
chain_offset = chain_offset <<
MPI2_SGE_CHAIN_OFFSET_SHIFT;
chain_length += ioc->sge_size;
}
ioc->base_add_sg_single(sg_local, chain_flags | chain_offset |
chain_length, chain_dma);
sg_local = chain;
if (!chain_offset)
goto fill_in_last_segment;
/* fill in chain segments */
while (sges_in_segment) {
if (sges_in_segment == 1)
ioc->base_add_sg_single(sg_local,
sgl_flags_last_element |
sg_dma_len(sg_scmd),
sg_dma_address(sg_scmd));
else
ioc->base_add_sg_single(sg_local, sgl_flags |
sg_dma_len(sg_scmd),
sg_dma_address(sg_scmd));
sg_scmd = sg_next(sg_scmd);
sg_local += ioc->sge_size;
sges_left--;
sges_in_segment--;
}
chain_req = _base_get_chain_buffer_tracker(ioc, smid);
if (!chain_req)
return -1;
chain = chain_req->chain_buffer;
chain_dma = chain_req->chain_buffer_dma;
} while (1);
fill_in_last_segment:
/* fill the last segment */
while (sges_left) {
if (sges_left == 1)
ioc->base_add_sg_single(sg_local, sgl_flags_end_buffer |
sg_dma_len(sg_scmd), sg_dma_address(sg_scmd));
else
ioc->base_add_sg_single(sg_local, sgl_flags |
sg_dma_len(sg_scmd), sg_dma_address(sg_scmd));
sg_scmd = sg_next(sg_scmd);
sg_local += ioc->sge_size;
sges_left--;
}
return 0;
}
/**
* _base_build_sg_scmd_ieee - main sg creation routine for IEEE format
* @ioc: per adapter object
@ -1571,6 +1746,14 @@ _base_check_enable_msix(struct MPT3SAS_ADAPTER *ioc)
int base;
u16 message_control;
/* Check whether the controller is a SAS2008 B0 controller;
 * if so, use IO-APIC instead of MSI-X.
*/
if (ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2008 &&
ioc->pdev->revision == SAS2_PCI_DEVICE_B0_REVISION) {
return -EINVAL;
}
base = pci_find_capability(ioc->pdev, PCI_CAP_ID_MSIX);
if (!base) {
dfailprintk(ioc, pr_info(MPT3SAS_FMT "msix not supported\n",
@ -1579,9 +1762,19 @@ _base_check_enable_msix(struct MPT3SAS_ADAPTER *ioc)
}
/* get msix vector count */
pci_read_config_word(ioc->pdev, base + 2, &message_control);
ioc->msix_vector_count = (message_control & 0x3FF) + 1;
/* NUMA_IO not supported for older controllers */
if (ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2004 ||
ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2008 ||
ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2108_1 ||
ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2108_2 ||
ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2108_3 ||
ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2116_1 ||
ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2116_2)
ioc->msix_vector_count = 1;
else {
pci_read_config_word(ioc->pdev, base + 2, &message_control);
ioc->msix_vector_count = (message_control & 0x3FF) + 1;
}
dinitprintk(ioc, pr_info(MPT3SAS_FMT
"msix is supported, vector_count(%d)\n",
ioc->name, ioc->msix_vector_count));
@ -1643,10 +1836,10 @@ _base_request_irq(struct MPT3SAS_ADAPTER *ioc, u8 index, u32 vector)
atomic_set(&reply_q->busy, 0);
if (ioc->msix_enable)
snprintf(reply_q->name, MPT_NAME_LENGTH, "%s%d-msix%d",
MPT3SAS_DRIVER_NAME, ioc->id, index);
ioc->driver_name, ioc->id, index);
else
snprintf(reply_q->name, MPT_NAME_LENGTH, "%s%d",
MPT3SAS_DRIVER_NAME, ioc->id);
ioc->driver_name, ioc->id);
r = request_irq(vector, _base_interrupt, IRQF_SHARED, reply_q->name,
reply_q);
if (r) {
@ -1872,7 +2065,7 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
if (pci_request_selected_regions(pdev, ioc->bars,
MPT3SAS_DRIVER_NAME)) {
ioc->driver_name)) {
pr_warn(MPT3SAS_FMT "pci_request_selected_regions: failed\n",
ioc->name);
ioc->bars = 0;
@ -2158,6 +2351,7 @@ mpt3sas_base_free_smid(struct MPT3SAS_ADAPTER *ioc, u16 smid)
}
ioc->scsi_lookup[i].cb_idx = 0xFF;
ioc->scsi_lookup[i].scmd = NULL;
ioc->scsi_lookup[i].direct_io = 0;
list_add(&ioc->scsi_lookup[i].tracker_list, &ioc->free_list);
spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
@ -2318,143 +2512,261 @@ mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid)
}
/**
* _base_display_intel_branding - Display branding string
* _base_display_OEMs_branding - Display branding string
* @ioc: per adapter object
*
* Return nothing.
*/
static void
_base_display_intel_branding(struct MPT3SAS_ADAPTER *ioc)
_base_display_OEMs_branding(struct MPT3SAS_ADAPTER *ioc)
{
if (ioc->pdev->subsystem_vendor != PCI_VENDOR_ID_INTEL)
return;
switch (ioc->pdev->device) {
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_INTEL_RMS3JC080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RMS3JC080_BRANDING);
break;
switch (ioc->pdev->subsystem_vendor) {
case PCI_VENDOR_ID_INTEL:
switch (ioc->pdev->device) {
case MPI2_MFGPAGE_DEVID_SAS2008:
switch (ioc->pdev->subsystem_device) {
case MPT2SAS_INTEL_RMS2LL080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS2LL080_BRANDING);
break;
case MPT2SAS_INTEL_RMS2LL040_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS2LL040_BRANDING);
break;
case MPT2SAS_INTEL_SSD910_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_SSD910_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
case MPI2_MFGPAGE_DEVID_SAS2308_2:
switch (ioc->pdev->subsystem_device) {
case MPT2SAS_INTEL_RS25GB008_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RS25GB008_BRANDING);
break;
case MPT2SAS_INTEL_RMS25JB080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS25JB080_BRANDING);
break;
case MPT2SAS_INTEL_RMS25JB040_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS25JB040_BRANDING);
break;
case MPT2SAS_INTEL_RMS25KB080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS25KB080_BRANDING);
break;
case MPT2SAS_INTEL_RMS25KB040_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS25KB040_BRANDING);
break;
case MPT2SAS_INTEL_RMS25LB040_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS25LB040_BRANDING);
break;
case MPT2SAS_INTEL_RMS25LB080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_INTEL_RMS25LB080_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_INTEL_RMS3JC080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RMS3JC080_BRANDING);
break;
case MPT3SAS_INTEL_RS3GC008_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3GC008_BRANDING);
break;
case MPT3SAS_INTEL_RS3FC044_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3FC044_BRANDING);
break;
case MPT3SAS_INTEL_RS3UC080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3UC080_BRANDING);
case MPT3SAS_INTEL_RS3GC008_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3GC008_BRANDING);
break;
case MPT3SAS_INTEL_RS3FC044_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3FC044_BRANDING);
break;
case MPT3SAS_INTEL_RS3UC080_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_INTEL_RS3UC080_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
default:
pr_info(MPT3SAS_FMT
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
}
/**
* _base_display_dell_branding - Display branding string
* @ioc: per adapter object
*
* Return nothing.
*/
static void
_base_display_dell_branding(struct MPT3SAS_ADAPTER *ioc)
{
if (ioc->pdev->subsystem_vendor != PCI_VENDOR_ID_DELL)
return;
switch (ioc->pdev->device) {
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_DELL_12G_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_DELL_12G_HBA_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Dell 12Gbps HBA: Subsystem ID: 0x%X\n", ioc->name,
ioc->pdev->subsystem_device);
break;
}
break;
default:
pr_info(MPT3SAS_FMT
"Dell 12Gbps HBA: Subsystem ID: 0x%X\n", ioc->name,
ioc->pdev->subsystem_device);
break;
}
}
/**
* _base_display_cisco_branding - Display branding string
* @ioc: per adapter object
*
* Return nothing.
*/
static void
_base_display_cisco_branding(struct MPT3SAS_ADAPTER *ioc)
{
if (ioc->pdev->subsystem_vendor != PCI_VENDOR_ID_CISCO)
return;
switch (ioc->pdev->device) {
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_CISCO_12G_8E_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_8E_HBA_BRANDING);
break;
case MPT3SAS_CISCO_12G_8I_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_8I_HBA_BRANDING);
break;
case MPT3SAS_CISCO_12G_AVILA_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_AVILA_HBA_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Cisco 12Gbps SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
case MPI25_MFGPAGE_DEVID_SAS3108_1:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_CISCO_12G_AVILA_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_AVILA_HBA_BRANDING);
break;
case MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Cisco 12Gbps SAS HBA: Subsystem ID: 0x%X\n",
"Intel(R) Controller: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
case PCI_VENDOR_ID_DELL:
switch (ioc->pdev->device) {
case MPI2_MFGPAGE_DEVID_SAS2008:
switch (ioc->pdev->subsystem_device) {
case MPT2SAS_DELL_6GBPS_SAS_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_6GBPS_SAS_HBA_BRANDING);
break;
case MPT2SAS_DELL_PERC_H200_ADAPTER_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_PERC_H200_ADAPTER_BRANDING);
break;
case MPT2SAS_DELL_PERC_H200_INTEGRATED_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_PERC_H200_INTEGRATED_BRANDING);
break;
case MPT2SAS_DELL_PERC_H200_MODULAR_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_PERC_H200_MODULAR_BRANDING);
break;
case MPT2SAS_DELL_PERC_H200_EMBEDDED_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_PERC_H200_EMBEDDED_BRANDING);
break;
case MPT2SAS_DELL_PERC_H200_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_PERC_H200_BRANDING);
break;
case MPT2SAS_DELL_6GBPS_SAS_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_DELL_6GBPS_SAS_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Dell 6Gbps HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_DELL_12G_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_DELL_12G_HBA_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Dell 12Gbps HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
default:
pr_info(MPT3SAS_FMT
"Dell HBA: Subsystem ID: 0x%X\n", ioc->name,
ioc->pdev->subsystem_device);
break;
}
break;
case PCI_VENDOR_ID_CISCO:
switch (ioc->pdev->device) {
case MPI25_MFGPAGE_DEVID_SAS3008:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_CISCO_12G_8E_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_8E_HBA_BRANDING);
break;
case MPT3SAS_CISCO_12G_8I_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_8I_HBA_BRANDING);
break;
case MPT3SAS_CISCO_12G_AVILA_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_AVILA_HBA_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"Cisco 12Gbps SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
case MPI25_MFGPAGE_DEVID_SAS3108_1:
switch (ioc->pdev->subsystem_device) {
case MPT3SAS_CISCO_12G_AVILA_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_AVILA_HBA_BRANDING);
break;
case MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_BRANDING
);
break;
default:
pr_info(MPT3SAS_FMT
"Cisco 12Gbps SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
default:
pr_info(MPT3SAS_FMT
"Cisco SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
break;
case MPT2SAS_HP_3PAR_SSVID:
switch (ioc->pdev->device) {
case MPI2_MFGPAGE_DEVID_SAS2004:
switch (ioc->pdev->subsystem_device) {
case MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"HP 6Gbps SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
case MPI2_MFGPAGE_DEVID_SAS2308_2:
switch (ioc->pdev->subsystem_device) {
case MPT2SAS_HP_2_4_INTERNAL_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_HP_2_4_INTERNAL_BRANDING);
break;
case MPT2SAS_HP_2_4_EXTERNAL_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_HP_2_4_EXTERNAL_BRANDING);
break;
case MPT2SAS_HP_1_4_INTERNAL_1_4_EXTERNAL_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_HP_1_4_INTERNAL_1_4_EXTERNAL_BRANDING);
break;
case MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_SSDID:
pr_info(MPT3SAS_FMT "%s\n", ioc->name,
MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_BRANDING);
break;
default:
pr_info(MPT3SAS_FMT
"HP 6Gbps SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
default:
pr_info(MPT3SAS_FMT
"HP SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
default:
pr_info(MPT3SAS_FMT
"Cisco 12Gbps SAS HBA: Subsystem ID: 0x%X\n",
ioc->name, ioc->pdev->subsystem_device);
break;
}
}
@ -2488,9 +2800,7 @@ _base_display_ioc_capabilities(struct MPT3SAS_ADAPTER *ioc)
(bios_version & 0x0000FF00) >> 8,
bios_version & 0x000000FF);
_base_display_intel_branding(ioc);
_base_display_dell_branding(ioc);
_base_display_cisco_branding(ioc);
_base_display_OEMs_branding(ioc);
pr_info(MPT3SAS_FMT "Protocol=(", ioc->name);
@ -2508,10 +2818,12 @@ _base_display_ioc_capabilities(struct MPT3SAS_ADAPTER *ioc)
pr_info("), ");
pr_info("Capabilities=(");
if (ioc->facts.IOCCapabilities &
if (!ioc->hide_ir_msg) {
if (ioc->facts.IOCCapabilities &
MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID) {
pr_info("Raid");
i++;
}
}
if (ioc->facts.IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_TLR) {
@ -2852,18 +3164,22 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
/* command line tunables for max sgl entries */
if (max_sgl_entries != -1)
sg_tablesize = max_sgl_entries;
else
sg_tablesize = MPT3SAS_SG_DEPTH;
else {
if (ioc->hba_mpi_version_belonged == MPI2_VERSION)
sg_tablesize = MPT2SAS_SG_DEPTH;
else
sg_tablesize = MPT3SAS_SG_DEPTH;
}
if (sg_tablesize < MPT3SAS_MIN_PHYS_SEGMENTS)
sg_tablesize = MPT3SAS_MIN_PHYS_SEGMENTS;
else if (sg_tablesize > MPT3SAS_MAX_PHYS_SEGMENTS) {
if (sg_tablesize < MPT_MIN_PHYS_SEGMENTS)
sg_tablesize = MPT_MIN_PHYS_SEGMENTS;
else if (sg_tablesize > MPT_MAX_PHYS_SEGMENTS) {
sg_tablesize = min_t(unsigned short, sg_tablesize,
SCSI_MAX_SG_CHAIN_SEGMENTS);
pr_warn(MPT3SAS_FMT
"sg_tablesize(%u) is bigger than kernel"
" defined SCSI_MAX_SG_SEGMENTS(%u)\n", ioc->name,
sg_tablesize, MPT3SAS_MAX_PHYS_SEGMENTS);
sg_tablesize, MPT_MAX_PHYS_SEGMENTS);
}
ioc->shost->sg_tablesize = sg_tablesize;
@ -4021,7 +4337,7 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
mpi_request.WhoInit = MPI2_WHOINIT_HOST_DRIVER;
mpi_request.VF_ID = 0; /* TODO */
mpi_request.VP_ID = 0;
mpi_request.MsgVersion = cpu_to_le16(MPI25_VERSION);
mpi_request.MsgVersion = cpu_to_le16(ioc->hba_mpi_version_belonged);
mpi_request.HeaderVersion = cpu_to_le16(MPI2_HEADER_VERSION);
if (_base_is_controller_msix_enabled(ioc))
@ -4655,6 +4971,7 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
u32 reply_address;
u16 smid;
struct _tr_list *delayed_tr, *delayed_tr_next;
u8 hide_flag;
struct adapter_reply_queue *reply_q;
long reply_post_free;
u32 reply_post_free_sz, index = 0;
@ -4685,6 +5002,7 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
ioc->scsi_lookup[i].cb_idx = 0xFF;
ioc->scsi_lookup[i].smid = smid;
ioc->scsi_lookup[i].scmd = NULL;
ioc->scsi_lookup[i].direct_io = 0;
list_add_tail(&ioc->scsi_lookup[i].tracker_list,
&ioc->free_list);
}
@ -4787,6 +5105,16 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
if (ioc->is_driver_loading) {
if (ioc->is_warpdrive && ioc->manu_pg10.OEMIdentifier
== 0x80) {
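/* Manufacturing page 10 OEMSpecificFlags0 carries the WarpDrive
 * drive-hiding policy; see the MFG_PAGE10_* definitions in
 * mpt3sas_base.h.
 */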
hide_flag = (u8) (
le32_to_cpu(ioc->manu_pg10.OEMSpecificFlags0) &
MFG_PAGE10_HIDE_SSDS_MASK);
if (hide_flag != MFG_PAGE10_HIDE_SSDS_MASK)
ioc->mfg_pg10_hide_flag = hide_flag;
}
ioc->wait_for_discovery_to_complete =
_base_determine_wait_on_discovery(ioc);
@ -4812,6 +5140,8 @@ mpt3sas_base_free_resources(struct MPT3SAS_ADAPTER *ioc)
dexitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__));
/* synchronizing freeing resource with pci_access_mutex lock */
mutex_lock(&ioc->pci_access_mutex);
if (ioc->chip_phys && ioc->chip) {
_base_mask_interrupts(ioc);
ioc->shost_recovery = 1;
@ -4820,6 +5150,7 @@ mpt3sas_base_free_resources(struct MPT3SAS_ADAPTER *ioc)
}
mpt3sas_base_unmap_resources(ioc);
mutex_unlock(&ioc->pci_access_mutex);
return;
}
@ -4834,7 +5165,6 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
{
int r, i;
int cpu_id, last_cpu_id = 0;
u8 revision;
dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__));
@ -4854,19 +5184,16 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
goto out_free_resources;
}
/* Check whether the controller revision is C0 or above.
* only C0 and above revision controllers support 96 MSI-X vectors.
*/
revision = ioc->pdev->revision;
if ((ioc->pdev->device == MPI25_MFGPAGE_DEVID_SAS3004 ||
ioc->pdev->device == MPI25_MFGPAGE_DEVID_SAS3008 ||
ioc->pdev->device == MPI25_MFGPAGE_DEVID_SAS3108_1 ||
ioc->pdev->device == MPI25_MFGPAGE_DEVID_SAS3108_2 ||
ioc->pdev->device == MPI25_MFGPAGE_DEVID_SAS3108_5 ||
ioc->pdev->device == MPI25_MFGPAGE_DEVID_SAS3108_6) &&
(revision >= 0x02))
ioc->msix96_vector = 1;
if (ioc->is_warpdrive) {
ioc->reply_post_host_index = kcalloc(ioc->cpu_msix_table_sz,
sizeof(resource_size_t *), GFP_KERNEL);
if (!ioc->reply_post_host_index) {
dfailprintk(ioc, pr_info(MPT3SAS_FMT "allocation "
"for cpu_msix_table failed!!!\n", ioc->name));
r = -ENOMEM;
goto out_free_resources;
}
}
ioc->rdpq_array_enable_assigned = 0;
ioc->dma_mask = 0;
@ -4874,23 +5201,41 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
if (r)
goto out_free_resources;
if (ioc->is_warpdrive) {
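/* Map the per-MSI-X reply post host index registers for WarpDrive:
 * index 0 uses the standard ReplyPostHostIndex register, the rest
 * use the supplemental registers in the Doorbell address space.
 */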
ioc->reply_post_host_index[0] = (resource_size_t __iomem *)
&ioc->chip->ReplyPostHostIndex;
for (i = 1; i < ioc->cpu_msix_table_sz; i++)
ioc->reply_post_host_index[i] =
(resource_size_t __iomem *)
((u8 __iomem *)&ioc->chip->Doorbell + (0x4000 + ((i - 1)
* 4)));
}
pci_set_drvdata(ioc->pdev, ioc->shost);
r = _base_get_ioc_facts(ioc, CAN_SLEEP);
if (r)
goto out_free_resources;
/*
* In SAS3.0,
* SCSI_IO, SMP_PASSTHRU, SATA_PASSTHRU, Target Assist, and
* Target Status - all require the IEEE formatted scatter gather
* elements.
*/
ioc->build_sg_scmd = &_base_build_sg_scmd_ieee;
ioc->build_sg = &_base_build_sg_ieee;
ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee;
ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t);
switch (ioc->hba_mpi_version_belonged) {
case MPI2_VERSION:
ioc->build_sg_scmd = &_base_build_sg_scmd;
ioc->build_sg = &_base_build_sg;
ioc->build_zero_len_sge = &_base_build_zero_len_sge;
break;
case MPI25_VERSION:
/*
* In SAS3.0,
* SCSI_IO, SMP_PASSTHRU, SATA_PASSTHRU, Target Assist, and
* Target Status - all require the IEEE formatted scatter gather
* elements.
*/
ioc->build_sg_scmd = &_base_build_sg_scmd_ieee;
ioc->build_sg = &_base_build_sg_ieee;
ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee;
ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t);
break;
}
/*
* These function pointers for other requests that don't
@ -5006,6 +5351,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
if (r)
goto out_free_resources;
ioc->non_operational_loop = 0;
return 0;
out_free_resources:
@ -5016,6 +5362,8 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
_base_release_memory_pools(ioc);
pci_set_drvdata(ioc->pdev, NULL);
kfree(ioc->cpu_msix_table);
if (ioc->is_warpdrive)
kfree(ioc->reply_post_host_index);
kfree(ioc->pd_handles);
kfree(ioc->blocking_handles);
kfree(ioc->tm_cmds.reply);
@ -5055,6 +5403,8 @@ mpt3sas_base_detach(struct MPT3SAS_ADAPTER *ioc)
_base_release_memory_pools(ioc);
pci_set_drvdata(ioc->pdev, NULL);
kfree(ioc->cpu_msix_table);
if (ioc->is_warpdrive)
kfree(ioc->reply_post_host_index);
kfree(ioc->pd_handles);
kfree(ioc->blocking_handles);
kfree(ioc->pfacts);


@ -63,6 +63,8 @@
#include <scsi/scsi_transport_sas.h>
#include <scsi/scsi_dbg.h>
#include <scsi/scsi_eh.h>
#include <linux/pci.h>
#include <linux/poll.h>
#include "mpt3sas_debug.h"
#include "mpt3sas_trigger_diag.h"
@ -71,23 +73,37 @@
#define MPT3SAS_DRIVER_NAME "mpt3sas"
#define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
#define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver"
#define MPT3SAS_DRIVER_VERSION "09.100.00.00"
#define MPT3SAS_DRIVER_VERSION "09.102.00.00"
#define MPT3SAS_MAJOR_VERSION 9
#define MPT3SAS_MINOR_VERSION 100
#define MPT3SAS_MINOR_VERSION 102
#define MPT3SAS_BUILD_VERSION 0
#define MPT3SAS_RELEASE_VERSION 00
#define MPT2SAS_DRIVER_NAME "mpt2sas"
#define MPT2SAS_DESCRIPTION "LSI MPT Fusion SAS 2.0 Device Driver"
#define MPT2SAS_DRIVER_VERSION "20.102.00.00"
#define MPT2SAS_MAJOR_VERSION 20
#define MPT2SAS_MINOR_VERSION 102
#define MPT2SAS_BUILD_VERSION 0
#define MPT2SAS_RELEASE_VERSION 00
/*
* Set MPT3SAS_SG_DEPTH value based on user input.
*/
#define MPT3SAS_MAX_PHYS_SEGMENTS SCSI_MAX_SG_SEGMENTS
#define MPT3SAS_MIN_PHYS_SEGMENTS 16
#define MPT_MAX_PHYS_SEGMENTS SCSI_MAX_SG_SEGMENTS
#define MPT_MIN_PHYS_SEGMENTS 16
#ifdef CONFIG_SCSI_MPT3SAS_MAX_SGE
#define MPT3SAS_SG_DEPTH CONFIG_SCSI_MPT3SAS_MAX_SGE
#else
#define MPT3SAS_SG_DEPTH MPT3SAS_MAX_PHYS_SEGMENTS
#define MPT3SAS_SG_DEPTH MPT_MAX_PHYS_SEGMENTS
#endif
#ifdef CONFIG_SCSI_MPT2SAS_MAX_SGE
#define MPT2SAS_SG_DEPTH CONFIG_SCSI_MPT2SAS_MAX_SGE
#else
#define MPT2SAS_SG_DEPTH MPT_MAX_PHYS_SEGMENTS
#endif
/*
* Generic Defines
@ -123,6 +139,16 @@
*/
#define MPT3SAS_FMT "%s: "
/*
* WarpDrive Specific Log codes
*/
#define MPT2_WARPDRIVE_LOGENTRY (0x8002)
#define MPT2_WARPDRIVE_LC_SSDT (0x41)
#define MPT2_WARPDRIVE_LC_SSDLW (0x43)
#define MPT2_WARPDRIVE_LC_SSDLF (0x44)
#define MPT2_WARPDRIVE_LC_BRMF (0x4D)
/*
* per target private data
*/
@ -131,9 +157,33 @@
#define MPT_TARGET_FLAGS_DELETED 0x04
#define MPT_TARGET_FASTPATH_IO 0x08
#define SAS2_PCI_DEVICE_B0_REVISION (0x01)
#define SAS3_PCI_DEVICE_C0_REVISION (0x02)
/*
* Intel HBA branding
*/
#define MPT2SAS_INTEL_RMS25JB080_BRANDING \
"Intel(R) Integrated RAID Module RMS25JB080"
#define MPT2SAS_INTEL_RMS25JB040_BRANDING \
"Intel(R) Integrated RAID Module RMS25JB040"
#define MPT2SAS_INTEL_RMS25KB080_BRANDING \
"Intel(R) Integrated RAID Module RMS25KB080"
#define MPT2SAS_INTEL_RMS25KB040_BRANDING \
"Intel(R) Integrated RAID Module RMS25KB040"
#define MPT2SAS_INTEL_RMS25LB040_BRANDING \
"Intel(R) Integrated RAID Module RMS25LB040"
#define MPT2SAS_INTEL_RMS25LB080_BRANDING \
"Intel(R) Integrated RAID Module RMS25LB080"
#define MPT2SAS_INTEL_RMS2LL080_BRANDING \
"Intel Integrated RAID Module RMS2LL080"
#define MPT2SAS_INTEL_RMS2LL040_BRANDING \
"Intel Integrated RAID Module RMS2LL040"
#define MPT2SAS_INTEL_RS25GB008_BRANDING \
"Intel(R) RAID Controller RS25GB008"
#define MPT2SAS_INTEL_SSD910_BRANDING \
"Intel(R) SSD 910 Series"
#define MPT3SAS_INTEL_RMS3JC080_BRANDING \
"Intel(R) Integrated RAID Module RMS3JC080"
#define MPT3SAS_INTEL_RS3GC008_BRANDING \
@ -146,33 +196,62 @@
/*
* Intel HBA SSDIDs
*/
#define MPT3SAS_INTEL_RMS3JC080_SSDID 0x3521
#define MPT3SAS_INTEL_RS3GC008_SSDID 0x3522
#define MPT3SAS_INTEL_RS3FC044_SSDID 0x3523
#define MPT3SAS_INTEL_RS3UC080_SSDID 0x3524
#define MPT2SAS_INTEL_RMS25JB080_SSDID 0x3516
#define MPT2SAS_INTEL_RMS25JB040_SSDID 0x3517
#define MPT2SAS_INTEL_RMS25KB080_SSDID 0x3518
#define MPT2SAS_INTEL_RMS25KB040_SSDID 0x3519
#define MPT2SAS_INTEL_RMS25LB040_SSDID 0x351A
#define MPT2SAS_INTEL_RMS25LB080_SSDID 0x351B
#define MPT2SAS_INTEL_RMS2LL080_SSDID 0x350E
#define MPT2SAS_INTEL_RMS2LL040_SSDID 0x350F
#define MPT2SAS_INTEL_RS25GB008_SSDID 0x3000
#define MPT2SAS_INTEL_SSD910_SSDID 0x3700
#define MPT3SAS_INTEL_RMS3JC080_SSDID 0x3521
#define MPT3SAS_INTEL_RS3GC008_SSDID 0x3522
#define MPT3SAS_INTEL_RS3FC044_SSDID 0x3523
#define MPT3SAS_INTEL_RS3UC080_SSDID 0x3524
/*
* Dell HBA branding
*/
#define MPT2SAS_DELL_BRANDING_SIZE 32
#define MPT2SAS_DELL_6GBPS_SAS_HBA_BRANDING "Dell 6Gbps SAS HBA"
#define MPT2SAS_DELL_PERC_H200_ADAPTER_BRANDING "Dell PERC H200 Adapter"
#define MPT2SAS_DELL_PERC_H200_INTEGRATED_BRANDING "Dell PERC H200 Integrated"
#define MPT2SAS_DELL_PERC_H200_MODULAR_BRANDING "Dell PERC H200 Modular"
#define MPT2SAS_DELL_PERC_H200_EMBEDDED_BRANDING "Dell PERC H200 Embedded"
#define MPT2SAS_DELL_PERC_H200_BRANDING "Dell PERC H200"
#define MPT2SAS_DELL_6GBPS_SAS_BRANDING "Dell 6Gbps SAS"
#define MPT3SAS_DELL_12G_HBA_BRANDING \
"Dell 12Gbps HBA"
/*
* Dell HBA SSDIDs
*/
#define MPT3SAS_DELL_12G_HBA_SSDID 0x1F46
#define MPT2SAS_DELL_6GBPS_SAS_HBA_SSDID 0x1F1C
#define MPT2SAS_DELL_PERC_H200_ADAPTER_SSDID 0x1F1D
#define MPT2SAS_DELL_PERC_H200_INTEGRATED_SSDID 0x1F1E
#define MPT2SAS_DELL_PERC_H200_MODULAR_SSDID 0x1F1F
#define MPT2SAS_DELL_PERC_H200_EMBEDDED_SSDID 0x1F20
#define MPT2SAS_DELL_PERC_H200_SSDID 0x1F21
#define MPT2SAS_DELL_6GBPS_SAS_SSDID 0x1F22
#define MPT3SAS_DELL_12G_HBA_SSDID 0x1F46
/*
* Cisco HBA branding
*/
#define MPT3SAS_CISCO_12G_8E_HBA_BRANDING \
"Cisco 9300-8E 12G SAS HBA"
"Cisco 9300-8E 12G SAS HBA"
#define MPT3SAS_CISCO_12G_8I_HBA_BRANDING \
"Cisco 9300-8i 12G SAS HBA"
"Cisco 9300-8i 12G SAS HBA"
#define MPT3SAS_CISCO_12G_AVILA_HBA_BRANDING \
"Cisco 12G Modular SAS Pass through Controller"
"Cisco 12G Modular SAS Pass through Controller"
#define MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_BRANDING \
"UCS C3X60 12G SAS Pass through Controller"
"UCS C3X60 12G SAS Pass through Controller"
/*
* Cisco HBA SSSDIDs
*/
@ -188,6 +267,31 @@
#define MPT3_DIAG_BUFFER_IS_RELEASED (0x02)
#define MPT3_DIAG_BUFFER_IS_DIAG_RESET (0x04)
/*
* HP HBA branding
*/
#define MPT2SAS_HP_3PAR_SSVID 0x1590
#define MPT2SAS_HP_2_4_INTERNAL_BRANDING \
"HP H220 Host Bus Adapter"
#define MPT2SAS_HP_2_4_EXTERNAL_BRANDING \
"HP H221 Host Bus Adapter"
#define MPT2SAS_HP_1_4_INTERNAL_1_4_EXTERNAL_BRANDING \
"HP H222 Host Bus Adapter"
#define MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_BRANDING \
"HP H220i Host Bus Adapter"
#define MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_BRANDING \
"HP H210i Host Bus Adapter"
/*
* HP HBA SSDIDs
*/
#define MPT2SAS_HP_2_4_INTERNAL_SSDID 0x0041
#define MPT2SAS_HP_2_4_EXTERNAL_SSDID 0x0042
#define MPT2SAS_HP_1_4_INTERNAL_1_4_EXTERNAL_SSDID 0x0043
#define MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_SSDID 0x0044
#define MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_SSDID 0x0046
/*
* Combined Reply Queue constants,
* There are twelve Supplemental Reply Post Host Index Registers
@ -243,20 +347,24 @@ struct Mpi2ManufacturingPage11_t {
* struct MPT3SAS_TARGET - starget private hostdata
* @starget: starget object
* @sas_address: target sas address
* @raid_device: raid_device pointer to access volume data
* @handle: device handle
* @num_luns: number luns
* @flags: MPT_TARGET_FLAGS_XXX flags
* @deleted: target flaged for deletion
* @tm_busy: target is busy with TM request.
* @sdev: The sas_device associated with this target
*/
struct MPT3SAS_TARGET {
struct scsi_target *starget;
u64 sas_address;
struct _raid_device *raid_device;
u16 handle;
int num_luns;
u32 flags;
u8 deleted;
u8 tm_busy;
struct _sas_device *sdev;
};
@ -266,6 +374,11 @@ struct MPT3SAS_TARGET {
#define MPT_DEVICE_FLAGS_INIT 0x01
#define MPT_DEVICE_TLR_ON 0x02
#define MFG_PAGE10_HIDE_SSDS_MASK (0x00000003)
#define MFG_PAGE10_HIDE_ALL_DISKS (0x00)
#define MFG_PAGE10_EXPOSE_ALL_DISKS (0x01)
#define MFG_PAGE10_HIDE_IF_VOL_PRESENT (0x02)
/**
* struct MPT3SAS_DEVICE - sdev private hostdata
* @sas_target: starget private hostdata
@ -358,8 +471,24 @@ struct _sas_device {
u8 pend_sas_rphy_add;
u8 enclosure_level;
u8 connector_name[4];
struct kref refcount;
};
static inline void sas_device_get(struct _sas_device *s)
{
kref_get(&s->refcount);
}
static inline void sas_device_free(struct kref *r)
{
kfree(container_of(r, struct _sas_device, refcount));
}
static inline void sas_device_put(struct _sas_device *s)
{
kref_put(&s->refcount, sas_device_free);
}
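/* A minimal usage sketch (illustrative only, not part of the driver):
 * lookup helpers are expected to take a reference under sas_device_lock
 * and the caller releases it with sas_device_put() when done.  The
 * example_get_sdev_by_addr() name and the sas_device_list/list members
 * are assumptions made for illustration.
 */
#if 0
static struct _sas_device *
example_get_sdev_by_addr(struct MPT3SAS_ADAPTER *ioc, u64 sas_address)
{
	struct _sas_device *sas_device;
	unsigned long flags;

	spin_lock_irqsave(&ioc->sas_device_lock, flags);
	list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
		if (sas_device->sas_address != sas_address)
			continue;
		/* take a reference while still holding the list lock */
		sas_device_get(sas_device);
		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
		return sas_device;
	}
	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
	return NULL;
}
#endif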
/**
* struct _raid_device - raid volume link list
* @list: sas device list
@ -367,6 +496,7 @@ struct _sas_device {
* @sdev: scsi device struct (volumes are single lun)
* @wwid: unique identifier for the volume
* @handle: device handle
* @block_sz: Block size of the volume
* @id: target id
* @channel: target channel
* @volume_type: the raid level
@ -374,6 +504,13 @@ struct _sas_device {
* @num_pds: number of hidden raid components
* @responding: used in _scsih_raid_device_mark_responding
* @percent_complete: resync percent complete
* @direct_io_enabled: Whether direct io to PDs are allowed or not
* @stripe_exponent: X where 2powX is the stripe sz in blocks
* @block_exponent: X where 2powX is the block sz in bytes
* @max_lba: Maximum number of LBA in the volume
* @stripe_sz: Stripe Size of the volume
* @device_info: Device info of the volume member disk
* @pd_handle: Array of handles of the physical drives for direct I/O in le16
*/
#define MPT_MAX_WARPDRIVE_PDS 8
struct _raid_device {
@ -382,13 +519,20 @@ struct _raid_device {
struct scsi_device *sdev;
u64 wwid;
u16 handle;
u16 block_sz;
int id;
int channel;
u8 volume_type;
u8 num_pds;
u8 responding;
u8 percent_complete;
u8 direct_io_enabled;
u8 stripe_exponent;
u8 block_exponent;
u64 max_lba;
u32 stripe_sz;
u32 device_info;
u16 pd_handle[MPT_MAX_WARPDRIVE_PDS];
};
/**
@ -497,12 +641,14 @@ struct chain_tracker {
* @smid: system message id
* @scmd: scsi request pointer
* @cb_idx: callback index
* @direct_io: To indicate whether I/O is direct (WARPDRIVE)
* @tracker_list: list of free request (ioc->free_list)
*/
struct scsiio_tracker {
u16 smid;
struct scsi_cmnd *scmd;
u8 cb_idx;
u8 direct_io;
struct list_head chain_list;
struct list_head tracker_list;
};
@ -775,7 +921,13 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
* @replyPostRegisterIndex: index of next position in Reply Desc Post Queue
* @delayed_tr_list: target reset link list
* @delayed_tr_volume_list: volume target reset link list
* @@temp_sensors_count: flag to carry the number of temperature sensors
* @temp_sensors_count: flag to carry the number of temperature sensors
* @pci_access_mutex: Mutex to synchronize the ioctl, sysfs show path and
*	pci resource handling. Freeing PCI resources releases vital
*	hardware/memory resources which might still be in use by the cli/sysfs
*	paths, resulting in a NULL pointer dereference followed by a kernel
*	crash. To avoid this race condition we use mutex synchronization,
*	which ensures synchronization between the cli/sysfs_show paths.
*/
struct MPT3SAS_ADAPTER {
struct list_head list;
@ -783,6 +935,7 @@ struct MPT3SAS_ADAPTER {
u8 id;
int cpu_count;
char name[MPT_NAME_LENGTH];
char driver_name[MPT_NAME_LENGTH];
char tmp_string[MPT_STRING_LENGTH];
struct pci_dev *pdev;
Mpi2SystemInterfaceRegs_t __iomem *chip;
@ -829,8 +982,10 @@ struct MPT3SAS_ADAPTER {
u16 msix_vector_count;
u8 *cpu_msix_table;
u16 cpu_msix_table_sz;
resource_size_t __iomem **reply_post_host_index;
u32 ioc_reset_count;
MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
u32 non_operational_loop;
/* internal commands, callback index */
u8 scsi_io_cb_idx;
@ -859,6 +1014,7 @@ struct MPT3SAS_ADAPTER {
MPT_BUILD_SG build_sg;
MPT_BUILD_ZERO_LEN_SGE build_zero_len_sge;
u16 sge_size_ieee;
u16 hba_mpi_version_belonged;
/* function ptr for MPI sg elements only */
MPT_BUILD_SG build_sg_mpi;
@ -987,6 +1143,7 @@ struct MPT3SAS_ADAPTER {
struct list_head delayed_tr_list;
struct list_head delayed_tr_volume_list;
u8 temp_sensors_count;
struct mutex pci_access_mutex;
/* diag buffer support */
u8 *diag_buffer[MPI2_DIAG_BUF_TYPE_COUNT];
@ -998,6 +1155,10 @@ struct MPT3SAS_ADAPTER {
u32 diagnostic_flags[MPI2_DIAG_BUF_TYPE_COUNT];
u32 ring_buffer_offset;
u32 ring_buffer_sz;
u8 is_warpdrive;
u8 hide_ir_msg;
u8 mfg_pg10_hide_flag;
u8 hide_drives;
spinlock_t diag_trigger_lock;
u8 diag_trigger_active;
struct SL_WH_MASTER_TRIGGER_T diag_trigger_master;
@ -1012,6 +1173,19 @@ typedef u8 (*MPT_CALLBACK)(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
/* base shared API */
extern struct list_head mpt3sas_ioc_list;
extern char driver_name[MPT_NAME_LENGTH];
/* spinlock on list operations over IOCs
* Case: when multiple warpdrive cards (IOCs) are in use.
* Each IOC is added to the ioc list structure on initialization.
* Watchdog threads run at regular intervals to check each IOC for any
* fault conditions which will trigger the dead_ioc thread to
* deallocate pci resources, resulting in the IOC entry being deleted from
* the list. This deletion needs to be protected by a spinlock to ensure
* that ioc removal is synchronized; if not synchronized it might lead to
* list_del corruption as the ioc list is traversed in the cli path.
*/
extern spinlock_t gioc_lock;
void mpt3sas_base_start_watchdog(struct MPT3SAS_ADAPTER *ioc);
void mpt3sas_base_stop_watchdog(struct MPT3SAS_ADAPTER *ioc);
@ -1090,10 +1264,14 @@ struct _sas_node *mpt3sas_scsih_expander_find_by_handle(
struct MPT3SAS_ADAPTER *ioc, u16 handle);
struct _sas_node *mpt3sas_scsih_expander_find_by_sas_address(
struct MPT3SAS_ADAPTER *ioc, u64 sas_address);
struct _sas_device *mpt3sas_scsih_sas_device_find_by_sas_address(
struct MPT3SAS_ADAPTER *ioc, u64 sas_address);
struct _sas_device *mpt3sas_get_sdev_by_addr(
struct MPT3SAS_ADAPTER *ioc, u64 sas_address);
struct _sas_device *__mpt3sas_get_sdev_by_addr(
struct MPT3SAS_ADAPTER *ioc, u64 sas_address);
void mpt3sas_port_enable_complete(struct MPT3SAS_ADAPTER *ioc);
struct _raid_device *
mpt3sas_raid_device_find_by_handle(struct MPT3SAS_ADAPTER *ioc, u16 handle);
/* config shared API */
u8 mpt3sas_config_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
@ -1133,6 +1311,8 @@ int mpt3sas_config_get_sas_iounit_pg0(struct MPT3SAS_ADAPTER *ioc,
u16 sz);
int mpt3sas_config_get_iounit_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
*mpi_reply, Mpi2IOUnitPage1_t *config_page);
int mpt3sas_config_get_iounit_pg3(struct MPT3SAS_ADAPTER *ioc,
Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage3_t *config_page, u16 sz);
int mpt3sas_config_set_iounit_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
*mpi_reply, Mpi2IOUnitPage1_t *config_page);
int mpt3sas_config_get_iounit_pg8(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t
@ -1177,8 +1357,8 @@ int mpt3sas_config_get_volume_wwid(struct MPT3SAS_ADAPTER *ioc,
/* ctl shared API */
extern struct device_attribute *mpt3sas_host_attrs[];
extern struct device_attribute *mpt3sas_dev_attrs[];
void mpt3sas_ctl_init(void);
void mpt3sas_ctl_exit(void);
void mpt3sas_ctl_init(ushort hbas_to_enumerate);
void mpt3sas_ctl_exit(ushort hbas_to_enumerate);
u8 mpt3sas_ctl_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
u32 reply);
void mpt3sas_ctl_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase);
@ -1193,6 +1373,7 @@ int mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
u8 *issue_reset);
/* transport shared API */
extern struct scsi_transport_template *mpt3sas_transport_template;
u8 mpt3sas_transport_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
u32 reply);
struct _sas_port *mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc,
@ -1224,4 +1405,18 @@ void mpt3sas_trigger_scsi(struct MPT3SAS_ADAPTER *ioc, u8 sense_key,
u8 asc, u8 ascq);
void mpt3sas_trigger_mpi(struct MPT3SAS_ADAPTER *ioc, u16 ioc_status,
u32 loginfo);
/* warpdrive APIs */
u8 mpt3sas_get_num_volumes(struct MPT3SAS_ADAPTER *ioc);
void mpt3sas_init_warpdrive_properties(struct MPT3SAS_ADAPTER *ioc,
struct _raid_device *raid_device);
u8
mpt3sas_scsi_direct_io_get(struct MPT3SAS_ADAPTER *ioc, u16 smid);
void
mpt3sas_scsi_direct_io_set(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 direct_io);
void
mpt3sas_setup_direct_io(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
struct _raid_device *raid_device, Mpi2SCSIIORequest_t *mpi_request,
u16 smid);
#endif /* MPT3SAS_BASE_H_INCLUDED */


@ -83,7 +83,6 @@ struct config_request {
dma_addr_t page_dma;
};
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
/**
* _config_display_some_debug - debug routine
* @ioc: per adapter object
@ -173,7 +172,6 @@ _config_display_some_debug(struct MPT3SAS_ADAPTER *ioc, u16 smid,
ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
le32_to_cpu(mpi_reply->IOCLogInfo));
}
#endif
/**
* _config_alloc_config_dma_memory - obtain physical memory
@ -255,9 +253,7 @@ mpt3sas_config_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
mpi_reply->MsgLength*4);
}
ioc->config_cmds.status &= ~MPT3_CMD_PENDING;
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
_config_display_some_debug(ioc, smid, "config_done", mpi_reply);
#endif
ioc->config_cmds.smid = USHRT_MAX;
complete(&ioc->config_cmds.done);
return 1;
@ -387,9 +383,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
config_request = mpt3sas_base_get_msg_frame(ioc, smid);
ioc->config_cmds.smid = smid;
memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
_config_display_some_debug(ioc, smid, "config_request", NULL);
#endif
init_completion(&ioc->config_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->config_cmds.done,
@ -871,6 +865,42 @@ mpt3sas_config_set_iounit_pg1(struct MPT3SAS_ADAPTER *ioc,
return r;
}
/**
* mpt3sas_config_get_iounit_pg3 - obtain iounit page 3
* @ioc: per adapter object
* @mpi_reply: reply mf payload returned from firmware
* @config_page: contents of the config page
* @sz: size of buffer passed in config_page
* Context: sleep.
*
* Returns 0 for success, non-zero for failure.
*/
int
mpt3sas_config_get_iounit_pg3(struct MPT3SAS_ADAPTER *ioc,
Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage3_t *config_page, u16 sz)
{
Mpi2ConfigRequest_t mpi_request;
int r;
memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
mpi_request.Function = MPI2_FUNCTION_CONFIG;
mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IO_UNIT;
mpi_request.Header.PageNumber = 3;
mpi_request.Header.PageVersion = MPI2_IOUNITPAGE3_PAGEVERSION;
ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
r = _config_request(ioc, &mpi_request, mpi_reply,
MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
if (r)
goto out;
mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
r = _config_request(ioc, &mpi_request, mpi_reply,
MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page, sz);
out:
return r;
}
/**
* mpt3sas_config_get_iounit_pg8 - obtain iounit page 8
* @ioc: per adapter object


@ -78,7 +78,6 @@ enum block_state {
BLOCKING,
};
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
/**
* _ctl_sas_device_find_by_handle - sas device search
* @ioc: per adapter object
@ -254,8 +253,6 @@ _ctl_display_some_debug(struct MPT3SAS_ADAPTER *ioc, u16 smid,
}
}
#endif
/**
* mpt3sas_ctl_done - ctl module completion routine
* @ioc: per adapter object
@ -302,9 +299,7 @@ mpt3sas_ctl_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
}
}
}
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
_ctl_display_some_debug(ioc, smid, "ctl_done", mpi_reply);
#endif
ioc->ctl_cmds.status &= ~MPT3_CMD_PENDING;
complete(&ioc->ctl_cmds.done);
return 1;
@ -414,20 +409,31 @@ mpt3sas_ctl_event_callback(struct MPT3SAS_ADAPTER *ioc, u8 msix_index,
* _ctl_verify_adapter - validates ioc_number passed from application
* @ioc: per adapter object
* @iocpp: The ioc pointer is returned in this.
* @mpi_version: will be MPI2_VERSION for mpt2ctl ioctl device &
* MPI25_VERSION for mpt3ctl ioctl device.
*
* Return (-1) means error, else ioc_number.
*/
static int
_ctl_verify_adapter(int ioc_number, struct MPT3SAS_ADAPTER **iocpp)
_ctl_verify_adapter(int ioc_number, struct MPT3SAS_ADAPTER **iocpp,
int mpi_version)
{
struct MPT3SAS_ADAPTER *ioc;
/* global ioc lock to protect controller on list operations */
spin_lock(&gioc_lock);
list_for_each_entry(ioc, &mpt3sas_ioc_list, list) {
if (ioc->id != ioc_number)
continue;
/* Check whether this ioctl command is from the right
* ioctl device or not; if not, continue the search.
*/
if (ioc->hba_mpi_version_belonged != mpi_version)
continue;
spin_unlock(&gioc_lock);
*iocpp = ioc;
return ioc_number;
}
spin_unlock(&gioc_lock);
*iocpp = NULL;
return -1;
}
@ -497,7 +503,7 @@ mpt3sas_ctl_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase)
*
* Called when application request fasyn callback handler.
*/
static int
int
_ctl_fasync(int fd, struct file *filep, int mode)
{
return fasync_helper(fd, filep, mode, &async_queue);
@ -509,17 +515,22 @@ _ctl_fasync(int fd, struct file *filep, int mode)
* @wait -
*
*/
static unsigned int
unsigned int
_ctl_poll(struct file *filep, poll_table *wait)
{
struct MPT3SAS_ADAPTER *ioc;
poll_wait(filep, &ctl_poll_wait, wait);
/* global ioc lock to protect controller on list operations */
spin_lock(&gioc_lock);
list_for_each_entry(ioc, &mpt3sas_ioc_list, list) {
if (ioc->aen_event_read_flag)
if (ioc->aen_event_read_flag) {
spin_unlock(&gioc_lock);
return POLLIN | POLLRDNORM;
}
}
spin_unlock(&gioc_lock);
return 0;
}
@ -759,9 +770,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
psge = (void *)request + (karg.data_sge_offset*4);
/* send command to firmware */
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
_ctl_display_some_debug(ioc, smid, "ctl_request", NULL);
#endif
init_completion(&ioc->ctl_cmds.done);
switch (mpi_request->Function) {
@ -916,7 +925,6 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
mpi_reply = ioc->ctl_cmds.reply;
ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
if (mpi_reply->Function == MPI2_FUNCTION_SCSI_TASK_MGMT &&
(ioc->logging_level & MPT_DEBUG_TM)) {
Mpi2SCSITaskManagementReply_t *tm_reply =
@ -929,7 +937,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
le32_to_cpu(tm_reply->IOCLogInfo),
le32_to_cpu(tm_reply->TerminationCount));
}
#endif
/* copy out xdata to user */
if (data_in_sz) {
if (copy_to_user(karg.data_in_buf_ptr, data_in,
@ -1023,7 +1031,6 @@ _ctl_getiocinfo(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
__func__));
memset(&karg, 0 , sizeof(karg));
karg.adapter_type = MPT3_IOCTL_INTERFACE_SAS3;
if (ioc->pfacts)
karg.port_number = ioc->pfacts[0].PortNumber;
karg.hw_rev = ioc->pdev->revision;
@ -1035,9 +1042,21 @@ _ctl_getiocinfo(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
karg.pci_information.u.bits.function = PCI_FUNC(ioc->pdev->devfn);
karg.pci_information.segment_id = pci_domain_nr(ioc->pdev->bus);
karg.firmware_version = ioc->facts.FWVersion.Word;
strcpy(karg.driver_version, MPT3SAS_DRIVER_NAME);
strcpy(karg.driver_version, ioc->driver_name);
strcat(karg.driver_version, "-");
strcat(karg.driver_version, MPT3SAS_DRIVER_VERSION);
switch (ioc->hba_mpi_version_belonged) {
case MPI2_VERSION:
if (ioc->is_warpdrive)
karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2_SSS6200;
else
karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2;
strcat(karg.driver_version, MPT2SAS_DRIVER_VERSION);
break;
case MPI25_VERSION:
karg.adapter_type = MPT3_IOCTL_INTERFACE_SAS3;
strcat(karg.driver_version, MPT3SAS_DRIVER_VERSION);
break;
}
karg.bios_version = le32_to_cpu(ioc->bios_pg3.BiosVersion);
if (copy_to_user(arg, &karg, sizeof(karg))) {
@ -2181,12 +2200,14 @@ _ctl_compat_mpt_command(struct MPT3SAS_ADAPTER *ioc, unsigned cmd,
* _ctl_ioctl_main - main ioctl entry point
* @file - (struct file)
* @cmd - ioctl opcode
* @arg -
* compat - handles 32 bit applications in 64bit os
* @arg - user space data buffer
* @compat - handles 32 bit applications in 64bit os
* @mpi_version: will be MPI2_VERSION for mpt2ctl ioctl device &
* MPI25_VERSION for mpt3ctl ioctl device.
*/
static long
_ctl_ioctl_main(struct file *file, unsigned int cmd, void __user *arg,
u8 compat)
u8 compat, u16 mpi_version)
{
struct MPT3SAS_ADAPTER *ioc;
struct mpt3_ioctl_header ioctl_header;
@ -2201,19 +2222,29 @@ _ctl_ioctl_main(struct file *file, unsigned int cmd, void __user *arg,
return -EFAULT;
}
if (_ctl_verify_adapter(ioctl_header.ioc_number, &ioc) == -1 || !ioc)
if (_ctl_verify_adapter(ioctl_header.ioc_number,
&ioc, mpi_version) == -1 || !ioc)
return -ENODEV;
/* pci_access_mutex lock acquired by ioctl path */
mutex_lock(&ioc->pci_access_mutex);
if (ioc->shost_recovery || ioc->pci_error_recovery ||
ioc->is_driver_loading)
return -EAGAIN;
ioc->is_driver_loading || ioc->remove_host) {
ret = -EAGAIN;
goto out_unlock_pciaccess;
}
state = (file->f_flags & O_NONBLOCK) ? NON_BLOCKING : BLOCKING;
if (state == NON_BLOCKING) {
if (!mutex_trylock(&ioc->ctl_cmds.mutex))
return -EAGAIN;
} else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex))
return -ERESTARTSYS;
if (!mutex_trylock(&ioc->ctl_cmds.mutex)) {
ret = -EAGAIN;
goto out_unlock_pciaccess;
}
} else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex)) {
ret = -ERESTARTSYS;
goto out_unlock_pciaccess;
}
switch (cmd) {
@ -2294,39 +2325,78 @@ _ctl_ioctl_main(struct file *file, unsigned int cmd, void __user *arg,
}
mutex_unlock(&ioc->ctl_cmds.mutex);
out_unlock_pciaccess:
mutex_unlock(&ioc->pci_access_mutex);
return ret;
}
/**
* _ctl_ioctl - main ioctl entry point (unlocked)
* _ctl_ioctl - mpt3ctl main ioctl entry point (unlocked)
* @file - (struct file)
* @cmd - ioctl opcode
* @arg -
*/
static long
long
_ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
long ret;
ret = _ctl_ioctl_main(file, cmd, (void __user *)arg, 0);
/* pass MPI25_VERSION value, to indicate that this ioctl cmd
* came from mpt3ctl ioctl device.
*/
ret = _ctl_ioctl_main(file, cmd, (void __user *)arg, 0, MPI25_VERSION);
return ret;
}
/**
* _ctl_mpt2_ioctl - mpt2ctl main ioctl entry point (unlocked)
* @file - (struct file)
* @cmd - ioctl opcode
* @arg -
*/
long
_ctl_mpt2_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
long ret;
/* pass MPI2_VERSION value, to indicate that this ioctl cmd
* came from mpt2ctl ioctl device.
*/
ret = _ctl_ioctl_main(file, cmd, (void __user *)arg, 0, MPI2_VERSION);
return ret;
}
#ifdef CONFIG_COMPAT
/**
* _ctl_ioctl_compat - main ioctl entry point (compat)
* _ctl_ioctl_compat - main ioctl entry point (compat)
* @file -
* @cmd -
* @arg -
*
* This routine handles 32 bit applications in 64bit os.
*/
static long
long
_ctl_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
{
long ret;
ret = _ctl_ioctl_main(file, cmd, (void __user *)arg, 1);
ret = _ctl_ioctl_main(file, cmd, (void __user *)arg, 1, MPI25_VERSION);
return ret;
}
/**
* _ctl_mpt2_ioctl_compat - main ioctl entry point (compat)
* @file -
* @cmd -
* @arg -
*
* This routine handles 32 bit applications in 64bit os.
*/
long
_ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
{
long ret;
ret = _ctl_ioctl_main(file, cmd, (void __user *)arg, 1, MPI2_VERSION);
return ret;
}
#endif
@ -2713,6 +2783,82 @@ _ctl_ioc_reply_queue_count_show(struct device *cdev,
static DEVICE_ATTR(reply_queue_count, S_IRUGO, _ctl_ioc_reply_queue_count_show,
NULL);
/**
* _ctl_BRM_status_show - Backup Rail Monitor Status
* @cdev - pointer to embedded class device
* @buf - the buffer returned
*
* Reports the Backup Rail Monitor status for WarpDrive controllers.
*
* A sysfs 'read-only' shost attribute.
*/
static ssize_t
_ctl_BRM_status_show(struct device *cdev, struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(cdev);
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
Mpi2IOUnitPage3_t *io_unit_pg3 = NULL;
Mpi2ConfigReply_t mpi_reply;
u16 backup_rail_monitor_status = 0;
u16 ioc_status;
int sz;
ssize_t rc = 0;
if (!ioc->is_warpdrive) {
pr_err(MPT3SAS_FMT "%s: BRM attribute is only for"
" warpdrive\n", ioc->name, __func__);
goto out;
}
/* pci_access_mutex lock acquired by sysfs show path */
mutex_lock(&ioc->pci_access_mutex);
if (ioc->pci_error_recovery || ioc->remove_host) {
mutex_unlock(&ioc->pci_access_mutex);
return 0;
}
/* allocate room for up to 36 GPIOVal entries */
sz = offsetof(Mpi2IOUnitPage3_t, GPIOVal) + (sizeof(u16) * 36);
io_unit_pg3 = kzalloc(sz, GFP_KERNEL);
if (!io_unit_pg3) {
pr_err(MPT3SAS_FMT "%s: failed allocating memory "
"for iounit_pg3: (%d) bytes\n", ioc->name, __func__, sz);
goto out;
}
if (mpt3sas_config_get_iounit_pg3(ioc, &mpi_reply, io_unit_pg3, sz) !=
0) {
pr_err(MPT3SAS_FMT
"%s: failed reading iounit_pg3\n", ioc->name,
__func__);
goto out;
}
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & MPI2_IOCSTATUS_MASK;
if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
pr_err(MPT3SAS_FMT "%s: iounit_pg3 failed with "
"ioc_status(0x%04x)\n", ioc->name, __func__, ioc_status);
goto out;
}
if (io_unit_pg3->GPIOCount < 25) {
pr_err(MPT3SAS_FMT "%s: iounit_pg3->GPIOCount less than "
"25 entries, detected (%d) entries\n", ioc->name, __func__,
io_unit_pg3->GPIOCount);
goto out;
}
/* BRM status is in bit zero of GPIOVal[24] */
backup_rail_monitor_status = le16_to_cpu(io_unit_pg3->GPIOVal[24]);
rc = snprintf(buf, PAGE_SIZE, "%d\n", (backup_rail_monitor_status & 1));
out:
kfree(io_unit_pg3);
mutex_unlock(&ioc->pci_access_mutex);
return rc;
}
static DEVICE_ATTR(BRM_status, S_IRUGO, _ctl_BRM_status_show, NULL);
struct DIAG_BUFFER_START {
__le32 Size;
__le32 DiagVersion;
@ -3165,6 +3311,7 @@ struct device_attribute *mpt3sas_host_attrs[] = {
&dev_attr_diag_trigger_event,
&dev_attr_diag_trigger_scsi,
&dev_attr_diag_trigger_mpi,
&dev_attr_BRM_status,
NULL,
};
@ -3218,6 +3365,7 @@ struct device_attribute *mpt3sas_dev_attrs[] = {
NULL,
};
/* file operations table for mpt3ctl device */
static const struct file_operations ctl_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = _ctl_ioctl,
@ -3228,23 +3376,53 @@ static const struct file_operations ctl_fops = {
#endif
};
/* file operations table for mpt2ctl device */
static const struct file_operations ctl_gen2_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = _ctl_mpt2_ioctl,
.poll = _ctl_poll,
.fasync = _ctl_fasync,
#ifdef CONFIG_COMPAT
.compat_ioctl = _ctl_mpt2_ioctl_compat,
#endif
};
static struct miscdevice ctl_dev = {
.minor = MPT3SAS_MINOR,
.name = MPT3SAS_DEV_NAME,
.fops = &ctl_fops,
};
static struct miscdevice gen2_ctl_dev = {
.minor = MPT2SAS_MINOR,
.name = MPT2SAS_DEV_NAME,
.fops = &ctl_gen2_fops,
};
/**
* mpt3sas_ctl_init - main entry point for ctl.
*
*/
void
mpt3sas_ctl_init(void)
mpt3sas_ctl_init(ushort hbas_to_enumerate)
{
async_queue = NULL;
if (misc_register(&ctl_dev) < 0)
pr_err("%s can't register misc device [minor=%d]\n",
MPT3SAS_DRIVER_NAME, MPT3SAS_MINOR);
/* Don't register mpt3ctl ioctl device if
* hbas_to_enumerate is one.
*/
if (hbas_to_enumerate != 1)
if (misc_register(&ctl_dev) < 0)
pr_err("%s can't register misc device [minor=%d]\n",
MPT3SAS_DRIVER_NAME, MPT3SAS_MINOR);
/* Don't register mpt2ctl ioctl device if
* hbas_to_enumerate is two.
*/
if (hbas_to_enumerate != 2)
if (misc_register(&gen2_ctl_dev) < 0)
pr_err("%s can't register misc device [minor=%d]\n",
MPT2SAS_DRIVER_NAME, MPT2SAS_MINOR);
init_waitqueue_head(&ctl_poll_wait);
}
@ -3254,7 +3432,7 @@ mpt3sas_ctl_init(void)
*
*/
void
mpt3sas_ctl_exit(void)
mpt3sas_ctl_exit(ushort hbas_to_enumerate)
{
struct MPT3SAS_ADAPTER *ioc;
int i;
@ -3279,5 +3457,8 @@ mpt3sas_ctl_exit(void)
kfree(ioc->event_log);
}
misc_deregister(&ctl_dev);
if (hbas_to_enumerate != 1)
misc_deregister(&ctl_dev);
if (hbas_to_enumerate != 2)
misc_deregister(&gen2_ctl_dev);
}


@ -50,10 +50,13 @@
#include <linux/miscdevice.h>
#endif
#ifndef MPT2SAS_MINOR
#define MPT2SAS_MINOR (MPT_MINOR + 1)
#endif
#ifndef MPT3SAS_MINOR
#define MPT3SAS_MINOR (MPT_MINOR + 2)
#endif
#define MPT2SAS_DEV_NAME "mpt2ctl"
#define MPT3SAS_DEV_NAME "mpt3ctl"
#define MPT3_MAGIC_NUMBER 'L'
#define MPT3_IOCTL_DEFAULT_TIMEOUT (10) /* in seconds */
@ -138,6 +141,7 @@ struct mpt3_ioctl_pci_info {
#define MPT2_IOCTL_INTERFACE_FC_IP (0x02)
#define MPT2_IOCTL_INTERFACE_SAS (0x03)
#define MPT2_IOCTL_INTERFACE_SAS2 (0x04)
#define MPT2_IOCTL_INTERFACE_SAS2_SSS6200 (0x05)
#define MPT3_IOCTL_INTERFACE_SAS3 (0x06)
#define MPT2_IOCTL_VERSION_LENGTH (32)


@ -68,20 +68,11 @@
#define MPT_DEBUG_TRIGGER_DIAG 0x00200000
/*
* CONFIG_SCSI_MPT3SAS_LOGGING - enabled in Kconfig
*/
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
#define MPT_CHECK_LOGGING(IOC, CMD, BITS) \
{ \
if (IOC->logging_level & BITS) \
CMD; \
}
#else
#define MPT_CHECK_LOGGING(IOC, CMD, BITS)
#endif /* CONFIG_SCSI_MPT3SAS_LOGGING */
/*
* debug macros
@ -153,7 +144,7 @@
/* inline functions for dumping debug data*/
#ifdef CONFIG_SCSI_MPT3SAS_LOGGING
/**
* _debug_dump_mf - print message frame contents
* @mpi_request: pointer to message frame
@ -211,10 +202,5 @@ _debug_dump_config(void *mpi_request, int sz)
}
pr_info("\n");
}
#else
#define _debug_dump_mf(mpi_request, sz)
#define _debug_dump_reply(mpi_request, sz)
#define _debug_dump_config(mpi_request, sz)
#endif /* CONFIG_SCSI_MPT3SAS_LOGGING */
#endif /* MPT3SAS_DEBUG_H_INCLUDED */

File diff suppressed because it is too large.


@ -734,7 +734,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
rphy->identify = mpt3sas_port->remote_identify;
if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
sas_device = mpt3sas_scsih_sas_device_find_by_sas_address(ioc,
sas_device = mpt3sas_get_sdev_by_addr(ioc,
mpt3sas_port->remote_identify.sas_address);
if (!sas_device) {
dfailprintk(ioc, printk(MPT3SAS_FMT
@ -750,8 +750,10 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
ioc->name, __FILE__, __LINE__, __func__);
}
if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE)
if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
sas_device->pend_sas_rphy_add = 0;
sas_device_put(sas_device);
}
if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
dev_printk(KERN_INFO, &rphy->dev,
@ -1324,15 +1326,17 @@ _transport_get_enclosure_identifier(struct sas_rphy *rphy, u64 *identifier)
int rc;
spin_lock_irqsave(&ioc->sas_device_lock, flags);
sas_device = mpt3sas_scsih_sas_device_find_by_sas_address(ioc,
sas_device = __mpt3sas_get_sdev_by_addr(ioc,
rphy->identify.sas_address);
if (sas_device) {
*identifier = sas_device->enclosure_logical_id;
rc = 0;
sas_device_put(sas_device);
} else {
*identifier = 0;
rc = -ENXIO;
}
spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
return rc;
}
@ -1352,12 +1356,14 @@ _transport_get_bay_identifier(struct sas_rphy *rphy)
int rc;
spin_lock_irqsave(&ioc->sas_device_lock, flags);
sas_device = mpt3sas_scsih_sas_device_find_by_sas_address(ioc,
sas_device = __mpt3sas_get_sdev_by_addr(ioc,
rphy->identify.sas_address);
if (sas_device)
if (sas_device) {
rc = sas_device->slot;
else
sas_device_put(sas_device);
} else {
rc = -ENXIO;
}
spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
return rc;
}


@ -0,0 +1,344 @@
/*
* Scsi Host Layer for MPT (Message Passing Technology) based controllers
*
* Copyright (C) 2012-2014 LSI Corporation
* Copyright (C) 2013-2015 Avago Technologies
* (mailto: MPT-FusionLinux.pdl@avagotech.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* NO WARRANTY
* THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
* LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
* solely responsible for determining the appropriateness of using and
* distributing the Program and assumes all risks associated with its
* exercise of rights under this Agreement, including but not limited to
* the risks and costs of program errors, damage to or loss of data,
* programs or equipment, and unavailability or interruption of operations.
* DISCLAIMER OF LIABILITY
* NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
* TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
* HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
* You should have received a copy of the GNU General Public License
* along with this program.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/unaligned.h>
#include "mpt3sas_base.h"
/**
* _warpdrive_disable_ddio - Disable direct I/O for all the volumes
* @ioc: per adapter object
*/
static void
_warpdrive_disable_ddio(struct MPT3SAS_ADAPTER *ioc)
{
Mpi2RaidVolPage1_t vol_pg1;
Mpi2ConfigReply_t mpi_reply;
struct _raid_device *raid_device;
u16 handle;
u16 ioc_status;
unsigned long flags;
handle = 0xFFFF;
while (!(mpt3sas_config_get_raid_volume_pg1(ioc, &mpi_reply,
&vol_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
MPI2_IOCSTATUS_MASK;
if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
break;
handle = le16_to_cpu(vol_pg1.DevHandle);
spin_lock_irqsave(&ioc->raid_device_lock, flags);
raid_device = mpt3sas_raid_device_find_by_handle(ioc, handle);
if (raid_device)
raid_device->direct_io_enabled = 0;
spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
}
return;
}
/**
* mpt3sas_get_num_volumes - Get number of volumes in the ioc
* @ioc: per adapter object
*/
u8
mpt3sas_get_num_volumes(struct MPT3SAS_ADAPTER *ioc)
{
Mpi2RaidVolPage1_t vol_pg1;
Mpi2ConfigReply_t mpi_reply;
u16 handle;
u8 vol_cnt = 0;
u16 ioc_status;
handle = 0xFFFF;
while (!(mpt3sas_config_get_raid_volume_pg1(ioc, &mpi_reply,
&vol_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
MPI2_IOCSTATUS_MASK;
if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
break;
vol_cnt++;
handle = le16_to_cpu(vol_pg1.DevHandle);
}
return vol_cnt;
}
/**
* mpt3sas_init_warpdrive_properties - Set properties for warpdrive direct I/O.
* @ioc: per adapter object
* @raid_device: the raid_device object
*/
void
mpt3sas_init_warpdrive_properties(struct MPT3SAS_ADAPTER *ioc,
struct _raid_device *raid_device)
{
Mpi2RaidVolPage0_t *vol_pg0;
Mpi2RaidPhysDiskPage0_t pd_pg0;
Mpi2ConfigReply_t mpi_reply;
u16 sz;
u8 num_pds, count;
unsigned long stripe_sz, block_sz;
u8 stripe_exp, block_exp;
u64 dev_max_lba;
if (!ioc->is_warpdrive)
return;
if (ioc->mfg_pg10_hide_flag == MFG_PAGE10_EXPOSE_ALL_DISKS) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"globally as drives are exposed\n", ioc->name);
return;
}
if (mpt3sas_get_num_volumes(ioc) > 1) {
_warpdrive_disable_ddio(ioc);
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"globally as number of drives > 1\n", ioc->name);
return;
}
if ((mpt3sas_config_get_number_pds(ioc, raid_device->handle,
&num_pds)) || !num_pds) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"Failure in computing number of drives\n", ioc->name);
return;
}
sz = offsetof(Mpi2RaidVolPage0_t, PhysDisk) + (num_pds *
sizeof(Mpi2RaidVol0PhysDisk_t));
vol_pg0 = kzalloc(sz, GFP_KERNEL);
if (!vol_pg0) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"Memory allocation failure for RVPG0\n", ioc->name);
return;
}
if ((mpt3sas_config_get_raid_volume_pg0(ioc, &mpi_reply, vol_pg0,
MPI2_RAID_VOLUME_PGAD_FORM_HANDLE, raid_device->handle, sz))) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"Failure in retrieving RVPG0\n", ioc->name);
kfree(vol_pg0);
return;
}
/*
* WARPDRIVE: If the number of physical disks in a volume exceeds the max PDs
* assumed for WARPDRIVE, disable direct I/O
*/
if (num_pds > MPT_MAX_WARPDRIVE_PDS) {
pr_warn(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x): num_mem=%d, "
"max_mem_allowed=%d\n", ioc->name, raid_device->handle,
num_pds, MPT_MAX_WARPDRIVE_PDS);
kfree(vol_pg0);
return;
}
for (count = 0; count < num_pds; count++) {
if (mpt3sas_config_get_phys_disk_pg0(ioc, &mpi_reply,
&pd_pg0, MPI2_PHYSDISK_PGAD_FORM_PHYSDISKNUM,
vol_pg0->PhysDisk[count].PhysDiskNum) ||
pd_pg0.DevHandle == MPT3SAS_INVALID_DEVICE_HANDLE) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is "
"disabled for the drive with handle(0x%04x) member"
"handle retrieval failed for member number=%d\n",
ioc->name, raid_device->handle,
vol_pg0->PhysDisk[count].PhysDiskNum);
goto out_error;
}
/* Disable direct I/O if member drive lba exceeds 4 bytes */
dev_max_lba = le64_to_cpu(pd_pg0.DeviceMaxLBA);
if (dev_max_lba >> 32) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is "
"disabled for the drive with handle(0x%04x) member"
" handle (0x%04x) unsupported max lba 0x%016llx\n",
ioc->name, raid_device->handle,
le16_to_cpu(pd_pg0.DevHandle),
(unsigned long long)dev_max_lba);
goto out_error;
}
raid_device->pd_handle[count] = le16_to_cpu(pd_pg0.DevHandle);
}
/*
* Assumption for WD: Direct I/O is not supported if the volume is
* not RAID0
*/
if (raid_device->volume_type != MPI2_RAID_VOL_TYPE_RAID0) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x): type=%d, "
"s_sz=%uK, blk_size=%u\n", ioc->name,
raid_device->handle, raid_device->volume_type,
(le32_to_cpu(vol_pg0->StripeSize) *
le16_to_cpu(vol_pg0->BlockSize)) / 1024,
le16_to_cpu(vol_pg0->BlockSize));
goto out_error;
}
stripe_sz = le32_to_cpu(vol_pg0->StripeSize);
stripe_exp = find_first_bit(&stripe_sz, 32);
if (stripe_exp == 32) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x) invalid stripe sz %uK\n",
ioc->name, raid_device->handle,
(le32_to_cpu(vol_pg0->StripeSize) *
le16_to_cpu(vol_pg0->BlockSize)) / 1024);
goto out_error;
}
raid_device->stripe_exponent = stripe_exp;
block_sz = le16_to_cpu(vol_pg0->BlockSize);
block_exp = find_first_bit(&block_sz, 16);
if (block_exp == 16) {
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is disabled "
"for the drive with handle(0x%04x) invalid block sz %u\n",
ioc->name, raid_device->handle,
le16_to_cpu(vol_pg0->BlockSize));
goto out_error;
}
raid_device->block_exponent = block_exp;
raid_device->direct_io_enabled = 1;
pr_info(MPT3SAS_FMT "WarpDrive : Direct IO is Enabled for the drive"
" with handle(0x%04x)\n", ioc->name, raid_device->handle);
/*
* WARPDRIVE: Though the following fields are not used for direct IO,
* they are stored for future use:
*/
raid_device->max_lba = le64_to_cpu(vol_pg0->MaxLBA);
raid_device->stripe_sz = le32_to_cpu(vol_pg0->StripeSize);
raid_device->block_sz = le16_to_cpu(vol_pg0->BlockSize);
kfree(vol_pg0);
return;
out_error:
raid_device->direct_io_enabled = 0;
for (count = 0; count < num_pds; count++)
raid_device->pd_handle[count] = 0;
kfree(vol_pg0);
return;
}
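
Earlier in this function the stripe and block sizes are reduced to shift exponents with find_first_bit(), so the I/O path further down can replace divisions and modulo operations with shifts and masks. A quick userspace sketch of the same idea, using __builtin_ctzl() in place of the kernel helper and with an explicit power-of-two check added for illustration:

#include <stdio.h>

/* For a power-of-two size the index of the lowest set bit is the
 * exponent, so "lba / sz" becomes "lba >> exp" and "lba % sz" becomes
 * "lba & (sz - 1)". */
static int size_to_exponent(unsigned long sz)
{
	if (sz == 0 || (sz & (sz - 1)))
		return -1;		/* zero or not a power of two */
	return __builtin_ctzl(sz);
}

int main(void)
{
	printf("%d\n", size_to_exponent(64));	/* 6 */
	printf("%d\n", size_to_exponent(512));	/* 9 */
	printf("%d\n", size_to_exponent(48));	/* -1 */
	return 0;
}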
/**
* mpt3sas_scsi_direct_io_get - returns direct io flag
* @ioc: per adapter object
* @smid: system request message index
*
* Returns the direct_io flag stored for this smid.
*/
inline u8
mpt3sas_scsi_direct_io_get(struct MPT3SAS_ADAPTER *ioc, u16 smid)
{
return ioc->scsi_lookup[smid - 1].direct_io;
}
/**
* mpt3sas_scsi_direct_io_set - sets direct io flag
* @ioc: per adapter object
* @smid: system request message index
* @direct_io: Zero or non-zero value to set in the direct_io flag
*
* Returns Nothing.
*/
inline void
mpt3sas_scsi_direct_io_set(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 direct_io)
{
ioc->scsi_lookup[smid - 1].direct_io = direct_io;
}
/**
* mpt3sas_setup_direct_io - setup MPI request for WARPDRIVE Direct I/O
* @ioc: per adapter object
* @scmd: pointer to scsi command object
* @raid_device: pointer to raid device data structure
* @mpi_request: pointer to the SCSI_IO request message frame
* @smid: system request message index
*
* Returns nothing
*/
void
mpt3sas_setup_direct_io(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
struct _raid_device *raid_device, Mpi2SCSIIORequest_t *mpi_request,
u16 smid)
{
sector_t v_lba, p_lba, stripe_off, column, io_size;
u32 stripe_sz, stripe_exp;
u8 num_pds, cmd = scmd->cmnd[0];
if (cmd != READ_10 && cmd != WRITE_10 &&
cmd != READ_16 && cmd != WRITE_16)
return;
if (cmd == READ_10 || cmd == WRITE_10)
v_lba = get_unaligned_be32(&mpi_request->CDB.CDB32[2]);
else
v_lba = get_unaligned_be64(&mpi_request->CDB.CDB32[2]);
io_size = scsi_bufflen(scmd) >> raid_device->block_exponent;
if (v_lba + io_size - 1 > raid_device->max_lba)
return;
stripe_sz = raid_device->stripe_sz;
stripe_exp = raid_device->stripe_exponent;
stripe_off = v_lba & (stripe_sz - 1);
/* Return unless IO falls within a stripe */
if (stripe_off + io_size > stripe_sz)
return;
num_pds = raid_device->num_pds;
p_lba = v_lba >> stripe_exp;
column = sector_div(p_lba, num_pds);
p_lba = (p_lba << stripe_exp) + stripe_off;
mpi_request->DevHandle = cpu_to_le16(raid_device->pd_handle[column]);
if (cmd == READ_10 || cmd == WRITE_10)
put_unaligned_be32(lower_32_bits(p_lba),
&mpi_request->CDB.CDB32[2]);
else
put_unaligned_be64(p_lba, &mpi_request->CDB.CDB32[2]);
mpt3sas_scsi_direct_io_set(ioc, smid, 1);
}
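
mpt3sas_setup_direct_io() above rewrites the CDB so that a command contained within a single stripe goes straight to the RAID0 member drive instead of through the volume device handle: the stripe number selects a member round-robin and the offset inside the stripe is preserved. A small userspace sketch of that arithmetic (illustrative only; the driver itself uses sector_div() and the exponents precomputed during WarpDrive setup):

#include <stdint.h>
#include <stdio.h>

/*
 * Map a virtual LBA on a RAID0 volume to (member index, physical LBA).
 * Only meaningful when the I/O does not cross a stripe boundary, which
 * the driver checks before remapping.
 */
static void map_raid0(uint64_t v_lba, unsigned stripe_exp, unsigned num_pds,
		      unsigned *member, uint64_t *p_lba)
{
	uint64_t stripe_sz = 1ULL << stripe_exp;
	uint64_t stripe_off = v_lba & (stripe_sz - 1);
	uint64_t stripe_num = v_lba >> stripe_exp;

	*member = (unsigned)(stripe_num % num_pds);
	*p_lba = ((stripe_num / num_pds) << stripe_exp) | stripe_off;
}

int main(void)
{
	unsigned member;
	uint64_t p_lba;

	/* 64-sector stripes (stripe_exp = 6) spread over 4 member drives */
	map_raid0(300, 6, 4, &member, &p_lba);
	printf("member=%u p_lba=%llu\n", member, (unsigned long long)p_lba);
	return 0;
}

For v_lba = 300 this prints member 0 and physical LBA 108: stripe number 4 lands on the first member again, in its second stripe, 44 sectors in.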


@ -65,7 +65,6 @@ static struct scsi_host_template mvs_sht = {
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
.shost_attrs = mvst_host_attrs,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -641,9 +640,9 @@ static void mvs_pci_remove(struct pci_dev *pdev)
tasklet_kill(&((struct mvs_prv_info *)sha->lldd_ha)->mv_tasklet);
#endif
scsi_remove_host(mvi->shost);
sas_unregister_ha(sha);
sas_remove_host(mvi->shost);
scsi_remove_host(mvi->shost);
MVS_CHIP_DISP->interrupt_disable(mvi);
free_irq(mvi->pdev->irq, sha);


@ -31,6 +31,7 @@
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/ktime.h>
#include <linux/blkdev.h>
#include <linux/io.h>
#include <scsi/scsi.h>
@ -858,8 +859,8 @@ static void mvumi_hs_build_page(struct mvumi_hba *mhba,
struct mvumi_hs_page2 *hs_page2;
struct mvumi_hs_page4 *hs_page4;
struct mvumi_hs_page3 *hs_page3;
struct timeval time;
unsigned int local_time;
u64 time;
u64 local_time;
switch (hs_header->page_code) {
case HS_PAGE_HOST_INFO:
@ -877,9 +878,8 @@ static void mvumi_hs_build_page(struct mvumi_hba *mhba,
hs_page2->slot_number = 0;
hs_page2->intr_level = 0;
hs_page2->intr_vector = 0;
do_gettimeofday(&time);
local_time = (unsigned int) (time.tv_sec -
(sys_tz.tz_minuteswest * 60));
time = ktime_get_real_seconds();
local_time = (time - (sys_tz.tz_minuteswest * 60));
hs_page2->seconds_since1970 = local_time;
hs_header->checksum = mvumi_calculate_checksum(hs_header,
hs_header->frame_length);
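
This hunk, like the pmcraid change further down, is one of the year 2038 fixes: do_gettimeofday() fills a struct timeval whose seconds field is only 32 bits wide on 32-bit systems, while ktime_get_real_seconds() returns 64-bit seconds. The local-time adjustment via sys_tz is kept as before. A quick userspace check of where the 32-bit limit falls (assumes a 64-bit time_t on the build host):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
	/* 2^31 - 1 seconds after the Unix epoch is the last instant a
	 * signed 32-bit seconds counter can represent. */
	time_t limit = (time_t)INT32_MAX;
	char buf[64];

	strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", gmtime(&limit));
	printf("32-bit seconds run out at %s\n", buf);
	return 0;
}

This prints 2038-01-19 03:14:07 UTC; beyond that point a 32-bit tv_sec wraps negative.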


@ -51,6 +51,8 @@ enum chip_flavors {
chip_8076,
chip_8077,
chip_8006,
chip_8070,
chip_8072
};
enum phy_speed {


@ -58,6 +58,8 @@ static const struct pm8001_chip_info pm8001_chips[] = {
[chip_8076] = {0, 16, &pm8001_80xx_dispatch,},
[chip_8077] = {0, 16, &pm8001_80xx_dispatch,},
[chip_8006] = {0, 16, &pm8001_80xx_dispatch,},
[chip_8070] = {0, 8, &pm8001_80xx_dispatch,},
[chip_8072] = {0, 16, &pm8001_80xx_dispatch,},
};
static int pm8001_id;
@ -88,7 +90,6 @@ static struct scsi_host_template pm8001_sht = {
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
.shost_attrs = pm8001_host_attrs,
.use_blk_tags = 1,
.track_queue_depth = 1,
};
@ -480,7 +481,8 @@ static struct pm8001_hba_info *pm8001_pci_alloc(struct pci_dev *pdev,
#ifdef PM8001_USE_TASKLET
/* Tasklet for non msi-x interrupt handler */
if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
if ((!pdev->msix_cap || !pci_msi_enabled())
|| (pm8001_ha->chip_id == chip_8001))
tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[0]));
else
@ -634,6 +636,11 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
payload.minor_function = 0;
payload.length = 128;
}
} else if ((pm8001_ha->chip_id == chip_8070 ||
pm8001_ha->chip_id == chip_8072) &&
pm8001_ha->pdev->subsystem_vendor == PCI_VENDOR_ID_ATTO) {
payload.minor_function = 4;
payload.length = 4096;
} else {
payload.minor_function = 1;
payload.length = 4096;
@ -660,6 +667,11 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
else if (deviceid == 0x0042)
pm8001_ha->sas_addr[j] =
payload.func_specific[0x010 + i];
} else if ((pm8001_ha->chip_id == chip_8070 ||
pm8001_ha->chip_id == chip_8072) &&
pm8001_ha->pdev->subsystem_vendor == PCI_VENDOR_ID_ATTO) {
pm8001_ha->sas_addr[j] =
payload.func_specific[0x010 + i];
} else
pm8001_ha->sas_addr[j] =
payload.func_specific[0x804 + i];
@ -720,6 +732,153 @@ static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha)
return 0;
}
struct pm8001_mpi3_phy_pg_trx_config {
u32 LaneLosCfg;
u32 LanePgaCfg1;
u32 LanePisoCfg1;
u32 LanePisoCfg2;
u32 LanePisoCfg3;
u32 LanePisoCfg4;
u32 LanePisoCfg5;
u32 LanePisoCfg6;
u32 LaneBctCtrl;
};
/**
* pm8001_get_internal_phy_settings : Retrieves the internal PHY settings
* @pm8001_ha : our adapter
* @phycfg : PHY config page to populate
*/
static
void pm8001_get_internal_phy_settings(struct pm8001_hba_info *pm8001_ha,
struct pm8001_mpi3_phy_pg_trx_config *phycfg)
{
phycfg->LaneLosCfg = 0x00000132;
phycfg->LanePgaCfg1 = 0x00203949;
phycfg->LanePisoCfg1 = 0x000000FF;
phycfg->LanePisoCfg2 = 0xFF000001;
phycfg->LanePisoCfg3 = 0xE7011300;
phycfg->LanePisoCfg4 = 0x631C40C0;
phycfg->LanePisoCfg5 = 0xF8102036;
phycfg->LanePisoCfg6 = 0xF74A1000;
phycfg->LaneBctCtrl = 0x00FB33F8;
}
/**
* pm8001_get_external_phy_settings : Retrieves the external PHY settings
* @pm8001_ha : our adapter
* @phycfg : PHY config page to populate
*/
static
void pm8001_get_external_phy_settings(struct pm8001_hba_info *pm8001_ha,
struct pm8001_mpi3_phy_pg_trx_config *phycfg)
{
phycfg->LaneLosCfg = 0x00000132;
phycfg->LanePgaCfg1 = 0x00203949;
phycfg->LanePisoCfg1 = 0x000000FF;
phycfg->LanePisoCfg2 = 0xFF000001;
phycfg->LanePisoCfg3 = 0xE7011300;
phycfg->LanePisoCfg4 = 0x63349140;
phycfg->LanePisoCfg5 = 0xF8102036;
phycfg->LanePisoCfg6 = 0xF80D9300;
phycfg->LaneBctCtrl = 0x00FB33F8;
}
/**
* pm8001_get_phy_mask : Retrieves the mask that denotes if a PHY is int/ext
* @pm8001_ha : our adapter
* @phymask : The PHY mask
*/
static
void pm8001_get_phy_mask(struct pm8001_hba_info *pm8001_ha, int *phymask)
{
switch (pm8001_ha->pdev->subsystem_device) {
case 0x0070: /* H1280 - 8 external 0 internal */
case 0x0072: /* H12F0 - 16 external 0 internal */
*phymask = 0x0000;
break;
case 0x0071: /* H1208 - 0 external 8 internal */
case 0x0073: /* H120F - 0 external 16 internal */
*phymask = 0xFFFF;
break;
case 0x0080: /* H1244 - 4 external 4 internal */
*phymask = 0x00F0;
break;
case 0x0081: /* H1248 - 4 external 8 internal */
*phymask = 0x0FF0;
break;
case 0x0082: /* H1288 - 8 external 8 internal */
*phymask = 0xFF00;
break;
default:
PM8001_INIT_DBG(pm8001_ha,
pm8001_printk("Unknown subsystem device=0x%.04x",
pm8001_ha->pdev->subsystem_device));
}
}
/**
* pm8001_set_phy_settings_ven_117c_12G : Configure ATTO 12Gb PHY settings
* @pm8001_ha : our adapter
*/
static
int pm8001_set_phy_settings_ven_117c_12G(struct pm8001_hba_info *pm8001_ha)
{
struct pm8001_mpi3_phy_pg_trx_config phycfg_int;
struct pm8001_mpi3_phy_pg_trx_config phycfg_ext;
int phymask = 0;
int i = 0;
memset(&phycfg_int, 0, sizeof(phycfg_int));
memset(&phycfg_ext, 0, sizeof(phycfg_ext));
pm8001_get_internal_phy_settings(pm8001_ha, &phycfg_int);
pm8001_get_external_phy_settings(pm8001_ha, &phycfg_ext);
pm8001_get_phy_mask(pm8001_ha, &phymask);
for (i = 0; i < pm8001_ha->chip->n_phy; i++) {
if (phymask & (1 << i)) {/* Internal PHY */
pm8001_set_phy_profile_single(pm8001_ha, i,
sizeof(phycfg_int) / sizeof(u32),
(u32 *)&phycfg_int);
} else { /* External PHY */
pm8001_set_phy_profile_single(pm8001_ha, i,
sizeof(phycfg_ext) / sizeof(u32),
(u32 *)&phycfg_ext);
}
}
return 0;
}
/**
* pm8001_configure_phy_settings : Configures PHY settings based on vendor ID.
* @pm8001_ha : our hba.
*/
static int pm8001_configure_phy_settings(struct pm8001_hba_info *pm8001_ha)
{
switch (pm8001_ha->pdev->subsystem_vendor) {
case PCI_VENDOR_ID_ATTO:
if (pm8001_ha->pdev->device == 0x0042) /* 6Gb */
return 0;
else
return pm8001_set_phy_settings_ven_117c_12G(pm8001_ha);
case PCI_VENDOR_ID_ADAPTEC2:
case 0:
return 0;
default:
return pm8001_get_phy_settings_info(pm8001_ha);
}
}
#ifdef PM8001_USE_MSIX
/**
* pm8001_setup_msix - enable MSI-X interrupt
@ -792,7 +951,7 @@ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
pdev = pm8001_ha->pdev;
#ifdef PM8001_USE_MSIX
if (pdev->msix_cap)
if (pdev->msix_cap && pci_msi_enabled())
return pm8001_setup_msix(pm8001_ha);
else {
PM8001_INIT_DBG(pm8001_ha,
@ -803,6 +962,8 @@ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
intx:
/* initialize the INT-X interrupt */
pm8001_ha->irq_vector[0].irq_id = 0;
pm8001_ha->irq_vector[0].drv_inst = pm8001_ha;
rc = request_irq(pdev->irq, pm8001_interrupt_handler_intx, IRQF_SHARED,
DRV_NAME, SHOST_TO_SAS_HA(pm8001_ha->shost));
return rc;
@ -902,12 +1063,9 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
pm8001_init_sas_add(pm8001_ha);
/* phy setting support for motherboard controller */
if (pdev->subsystem_vendor != PCI_VENDOR_ID_ADAPTEC2 &&
pdev->subsystem_vendor != 0) {
rc = pm8001_get_phy_settings_info(pm8001_ha);
if (rc)
goto err_out_shost;
}
if (pm8001_configure_phy_settings(pm8001_ha))
goto err_out_shost;
pm8001_post_sas_ha_init(shost, chip);
rc = sas_register_ha(SHOST_TO_SAS_HA(shost));
if (rc)
@ -937,10 +1095,10 @@ static void pm8001_pci_remove(struct pci_dev *pdev)
struct pm8001_hba_info *pm8001_ha;
int i, j;
pm8001_ha = sha->lldd_ha;
scsi_remove_host(pm8001_ha->shost);
sas_unregister_ha(sha);
sas_remove_host(pm8001_ha->shost);
list_del(&pm8001_ha->list);
scsi_remove_host(pm8001_ha->shost);
PM8001_CHIP_DISP->interrupt_disable(pm8001_ha, 0xFF);
PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
@ -956,7 +1114,8 @@ static void pm8001_pci_remove(struct pci_dev *pdev)
#endif
#ifdef PM8001_USE_TASKLET
/* For non-msix and msix interrupts */
if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
if ((!pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001))
tasklet_kill(&pm8001_ha->tasklet[0]);
else
for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
@ -1005,7 +1164,8 @@ static int pm8001_pci_suspend(struct pci_dev *pdev, pm_message_t state)
#endif
#ifdef PM8001_USE_TASKLET
/* For non-msix and msix interrupts */
if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
if ((!pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001))
tasklet_kill(&pm8001_ha->tasklet[0]);
else
for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
@ -1074,7 +1234,8 @@ static int pm8001_pci_resume(struct pci_dev *pdev)
goto err_out_disable;
#ifdef PM8001_USE_TASKLET
/* Tasklet for non msi-x interrupt handler */
if ((!pdev->msix_cap) || (pm8001_ha->chip_id == chip_8001))
if ((!pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001))
tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[0]));
else
@ -1087,6 +1248,19 @@ static int pm8001_pci_resume(struct pci_dev *pdev)
for (i = 1; i < pm8001_ha->number_of_intr; i++)
PM8001_CHIP_DISP->interrupt_enable(pm8001_ha, i);
}
/*
 * Chip documentation for the 8070 and 8072 SPCv states that a 500ms
 * minimum delay is required before issuing commands.  Otherwise, the
 * firmware will enter an unrecoverable state.
 */
if (pm8001_ha->chip_id == chip_8070 ||
pm8001_ha->chip_id == chip_8072) {
mdelay(500);
}
/* Spin up the PHYs */
pm8001_ha->flags = PM8001F_RUN_TIME;
for (i = 0; i < pm8001_ha->chip->n_phy; i++) {
pm8001_ha->phy[i].enable_completion = &completion;
@ -1165,6 +1339,20 @@ static struct pci_device_id pm8001_pci_table[] = {
PCI_VENDOR_ID_ADAPTEC2, 0x0808, 0, 0, chip_8077 },
{ PCI_VENDOR_ID_ADAPTEC2, 0x8074,
PCI_VENDOR_ID_ADAPTEC2, 0x0404, 0, 0, chip_8074 },
{ PCI_VENDOR_ID_ATTO, 0x8070,
PCI_VENDOR_ID_ATTO, 0x0070, 0, 0, chip_8070 },
{ PCI_VENDOR_ID_ATTO, 0x8070,
PCI_VENDOR_ID_ATTO, 0x0071, 0, 0, chip_8070 },
{ PCI_VENDOR_ID_ATTO, 0x8072,
PCI_VENDOR_ID_ATTO, 0x0072, 0, 0, chip_8072 },
{ PCI_VENDOR_ID_ATTO, 0x8072,
PCI_VENDOR_ID_ATTO, 0x0073, 0, 0, chip_8072 },
{ PCI_VENDOR_ID_ATTO, 0x8070,
PCI_VENDOR_ID_ATTO, 0x0080, 0, 0, chip_8070 },
{ PCI_VENDOR_ID_ATTO, 0x8072,
PCI_VENDOR_ID_ATTO, 0x0081, 0, 0, chip_8072 },
{ PCI_VENDOR_ID_ATTO, 0x8072,
PCI_VENDOR_ID_ATTO, 0x0082, 0, 0, chip_8072 },
{} /* terminate list */
};
@ -1220,7 +1408,7 @@ MODULE_AUTHOR("Anand Kumar Santhanam <AnandKumar.Santhanam@pmcs.com>");
MODULE_AUTHOR("Sangeetha Gnanasekaran <Sangeetha.Gnanasekaran@pmcs.com>");
MODULE_AUTHOR("Nikith Ganigarakoppal <Nikith.Ganigarakoppal@pmcs.com>");
MODULE_DESCRIPTION(
"PMC-Sierra PM8001/8006/8081/8088/8089/8074/8076/8077 "
"PMC-Sierra PM8001/8006/8081/8088/8089/8074/8076/8077/8070/8072 "
"SAS/SATA controller driver");
MODULE_VERSION(DRV_VERSION);
MODULE_LICENSE("GPL");


@ -106,7 +106,9 @@ do { \
#define DEV_IS_EXPANDER(type) ((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE))
#define IS_SPCV_12G(dev) ((dev->device == 0X8074) \
|| (dev->device == 0X8076) \
|| (dev->device == 0X8077))
|| (dev->device == 0X8077) \
|| (dev->device == 0X8070) \
|| (dev->device == 0X8072))
#define PM8001_NAME_LENGTH 32/* generic length of strings */
extern struct list_head hba_list;
@ -708,6 +710,8 @@ int pm80xx_set_thermal_config(struct pm8001_hba_info *pm8001_ha);
int pm8001_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shiftValue);
void pm8001_set_phy_profile(struct pm8001_hba_info *pm8001_ha,
u32 length, u8 *buf);
void pm8001_set_phy_profile_single(struct pm8001_hba_info *pm8001_ha,
u32 phy, u32 length, u32 *buf);
int pm80xx_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shiftValue);
ssize_t pm80xx_get_fatal_dump(struct device *cdev,
struct device_attribute *attr, char *buf);


@ -1267,6 +1267,8 @@ pm80xx_chip_soft_rst(struct pm8001_hba_info *pm8001_ha)
/* check iButton feature support for motherboard controller */
if (pm8001_ha->pdev->subsystem_vendor !=
PCI_VENDOR_ID_ADAPTEC2 &&
pm8001_ha->pdev->subsystem_vendor !=
PCI_VENDOR_ID_ATTO &&
pm8001_ha->pdev->subsystem_vendor != 0) {
ibutton0 = pm8001_cr32(pm8001_ha, 0,
MSGU_HOST_SCRATCH_PAD_6);
@ -4576,6 +4578,38 @@ void pm8001_set_phy_profile(struct pm8001_hba_info *pm8001_ha,
}
PM8001_INIT_DBG(pm8001_ha, pm8001_printk("phy settings completed\n"));
}
void pm8001_set_phy_profile_single(struct pm8001_hba_info *pm8001_ha,
u32 phy, u32 length, u32 *buf)
{
u32 tag, opc;
int rc, i;
struct set_phy_profile_req payload;
struct inbound_queue_table *circularQ;
memset(&payload, 0, sizeof(payload));
rc = pm8001_tag_alloc(pm8001_ha, &tag);
if (rc)
PM8001_INIT_DBG(pm8001_ha, pm8001_printk("Invalid tag"));
circularQ = &pm8001_ha->inbnd_q_tbl[0];
opc = OPC_INB_SET_PHY_PROFILE;
payload.tag = cpu_to_le32(tag);
payload.ppc_phyid = (((SAS_PHY_ANALOG_SETTINGS_PAGE & 0xF) << 8)
| (phy & 0xFF));
for (i = 0; i < length; i++)
payload.reserved[i] = cpu_to_le32(*(buf + i));
rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0);
if (rc)
pm8001_tag_free(pm8001_ha, tag);
PM8001_INIT_DBG(pm8001_ha,
pm8001_printk("PHY %d settings applied", phy));
}
const struct pm8001_dispatch pm8001_80xx_dispatch = {
.name = "pmc80xx",
.chip_init = pm80xx_chip_init,


@ -45,6 +45,7 @@
#include <asm/processor.h>
#include <linux/libata.h>
#include <linux/mutex.h>
#include <linux/ktime.h>
#include <scsi/scsi.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_device.h>
@ -4254,7 +4255,6 @@ static struct scsi_host_template pmcraid_host_template = {
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = pmcraid_host_attrs,
.proc_name = PMCRAID_DRIVER_NAME,
.use_blk_tags = 1,
};
/*
@ -5563,11 +5563,9 @@ static void pmcraid_set_timestamp(struct pmcraid_cmd *cmd)
__be32 time_stamp_len = cpu_to_be32(PMCRAID_TIMESTAMP_LEN);
struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl;
struct timeval tv;
__le64 timestamp;
do_gettimeofday(&tv);
timestamp = tv.tv_sec * 1000;
timestamp = ktime_get_real_seconds() * 1000;
pinstance->timestamp_data->timestamp[0] = (__u8)(timestamp);
pinstance->timestamp_data->timestamp[1] = (__u8)((timestamp) >> 8);
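
The hunk above derives a millisecond timestamp from 64-bit seconds and stores it into the timestamp array one byte at a time, least-significant byte first; the lines elided from the hunk presumably continue the same pattern for the remaining elements. A compact sketch of that decomposition (the length of 6 bytes is assumed here purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* Store a 64-bit millisecond value least-significant byte first. */
static void store_timestamp_le(uint64_t ms, uint8_t *out, int len)
{
	int i;

	for (i = 0; i < len; i++)
		out[i] = (uint8_t)(ms >> (8 * i));
}

int main(void)
{
	uint8_t buf[6];
	int i;

	store_timestamp_le(1447300000000ULL, buf, 6);	/* mid-November 2015, in ms */
	for (i = 0; i < 6; i++)
		printf("%02x ", buf[i]);
	printf("\n");
	return 0;
}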


@ -267,7 +267,6 @@ struct scsi_host_template qla2xxx_driver_template = {
.shost_attrs = qla2x00_host_attrs,
.supported_mode = MODE_INITIATOR,
.use_blk_tags = 1,
.track_queue_depth = 1,
};

Some files were not shown because too many files have changed in this diff Show More