Cc: Dan Horák <dan@danny.cz>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
---
Rather than having an explicit policy, use the presence of a
platform-defined external interrupt handler to determine whether we
should direct the interrupt to OPAL or not. This lets us remove a pile
of comments about why the policy is necessary, and about why we need to
unset it on P8+.
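As a minimal standalone sketch of the pattern (names here are
illustrative, not skiboot's actual ones), routing is decided by whether
the platform filled in a handler pointer rather than by a separate
policy flag:

    #include <stdio.h>

    /* Illustrative platform descriptor: the handler is optional. */
    struct platform {
        void (*external_irq)(unsigned int chip_id);
    };

    /* No policy flag: the presence of the hook *is* the policy. */
    static void dispatch_external_irq(const struct platform *p,
                                      unsigned int chip)
    {
        if (p->external_irq)
            p->external_irq(chip);  /* direct the interrupt to OPAL */
        else
            printf("chip %u: route the interrupt to the OS\n", chip);
    }

    int main(void)
    {
        struct platform nohandler = { .external_irq = NULL };

        dispatch_external_irq(&nohandler, 0);
        return 0;
    }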
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
---
Use Software Package Data Exchange (SPDX) tags to indicate the license
of each file that is unique to skiboot.
At the same time, ensure the copyright holders and years are correct.
See https://spdx.org/
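For example, a file's multi-line boilerplate collapses to a short
header like this (skiboot is Apache-2.0 licensed):

    // SPDX-License-Identifier: Apache-2.0
    /* Copyright 2013-2019 IBM Corp. */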
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
[oliver: Added a few missing files]
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
---
We have an implementation for non-FSP systems now, and we shouldn't be
calling that from code in an fsp/ directory, so move op_display() to a
platform function.
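The shape of the change, roughly (the enum values and field placement
below are assumptions for illustration): op_display() becomes a wrapper
around an optional per-platform callback, so fsp/ and astbmc platforms
can each supply their own.

    #include <stdint.h>

    /* Illustrative types; skiboot's real ones live elsewhere. */
    enum op_severity { OP_LOG, OP_WARN, OP_ERROR, OP_FATAL };
    enum op_module   { OP_MOD_CORE, OP_MOD_FSP, OP_MOD_CPU };

    struct platform {
        const char *name;
        /* NULL on platforms with no operator panel */
        void (*op_display)(enum op_severity, enum op_module, uint16_t code);
    };

    extern struct platform platform;

    static inline void op_display(enum op_severity s, enum op_module m,
                                  uint16_t code)
    {
        if (platform.op_display)
            platform.op_display(s, m, code);
    }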
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
---
On some POWER8 astbmc systems an update to the SBE requires pausing at
runtime to ensure the integrity of the SBE. If this is required, the BMC
will set a chassis boot option IPMI flag using OEM parameter 0x62. If
Skiboot sees this flag set, it waits until the SBE update is complete
and the flag is cleared.
Unfortunately, the mystery operation that validates the SBE also leaves
it in a bad state, unable to be used for timer operations. To work
around this, the flag is checked as soon as possible (i.e. when IPMI and
the console are set up), and once the update is complete the system is
rebooted.
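A sketch of the resulting flow (bmc_sbe_update_flag_set() is an assumed
helper standing in for the OEM parameter 0x62 read; prlog(),
time_wait_ms() and platform.cec_reboot() are the usual skiboot
facilities):

    #define SBE_UPDATE_POLL_MS 500

    /* Called as soon as IPMI and the console are up. */
    static void check_sbe_update(void)
    {
        if (!bmc_sbe_update_flag_set())  /* chassis boot option, OEM 0x62 */
            return;

        prlog(PR_NOTICE, "Waiting for SBE update to complete...\n");
        while (bmc_sbe_update_flag_set())
            time_wait_ms(SBE_UPDATE_POLL_MS);

        /* The validated SBE can no longer drive timers, so reboot
         * to bring it back into a usable state. */
        platform.cec_reboot();
    }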
Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Reviewed-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
---
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
---
Segregate the BMC platform configuration into hardware and software
components. This allows platform defaults to be populated for hardware
configuration that may no longer be accessible to the host.
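Roughly, the split looks like this (the field contents are illustrative;
only the hw/sw division itself is the point):

    struct bmc_hw_config {   /* fixed by the board design */
        uint32_t scu_revision_id;
    };

    struct bmc_sw_config {   /* varies with the BMC firmware stack */
        uint32_t ipmi_oem_partial_add_esel;
    };

    struct bmc_platform {
        const char *name;
        const struct bmc_hw_config *hw;   /* always knowable defaults */
        const struct bmc_sw_config *sw;   /* may be detected at runtime */
    };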
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
[stewart: fixup pci-quirk unit test]
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
---
On BMC machines we have slot tables of the built-in PHBs, slots and
devices that are physically present in the system (such as the BMC
itself). We can use these tables to check what we *detected* against
what *should* be in the system, and throw an error if they differ.
We have seen this occur a couple of times while still booting, leaving
the user with just an empty petitboot screen and not much else to go on.
This patch helps in that we now get a skiboot error message, and at some
point in the future, when we propagate these errors up to the OS, we
could get a big friendly error message telling you you're having a bad
day.
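A sketch of the check (slot_entry_was_detected() is an assumed helper;
the slot table types follow skiboot's astbmc conventions):

    static void check_slot_table(struct phb *phb,
                                 const struct slot_table_entry *tbl)
    {
        for (; tbl->etype != st_end; tbl++) {
            if (tbl->etype == st_builtin_dev &&
                !slot_entry_was_detected(phb, tbl))
                prlog(PR_ERR, "Expected device %s not detected!\n",
                      tbl->name);
        }
    }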
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Acked-by: Russell Currey <ruscur@russell.cc>
[stewart@linux.vnet.ibm.com: add barreleye]
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
An out-of-tree platform (p8dtu) uses a different IPMI OEM command for
IPMI_PARTIAL_ADD_ESEL. This exposed some assumptions about the BMC
implementation in our core code.
Now, with platform.bmc, each platform can dictate (or detect) the BMC
that is present. We allow it to be set at runtime rather than purely
statically in struct platform, as it's possible to have differing BMC
implementations on the same machine (e.g. an AMI BMC or OpenBMC).
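Roughly, a platform's probe code points platform.bmc at a descriptor
for the BMC it found, and the eSEL code reads the OEM command from
there instead of hard-coding it (the command value and compatible
string below are placeholders):

    static const struct bmc_platform bmc_plat_ami = {
        .name = "AMI",
        .ipmi_oem_partial_add_esel = IPMI_CODE(0x3a, 0xf0), /* placeholder */
    };

    static bool my_platform_probe(void)
    {
        if (!dt_node_is_compatible(dt_root, "example,my-platform"))
            return false;
        platform.bmc = &bmc_plat_ami;  /* or pick another after detection */
        return true;
    }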
Acked-by: Jeremy Kerr <jk@ozlabs.org>
[stewart@linux.vnet.ibm.com: remove enum, update (C) years]
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
Allocating BDFNs to NPU devices and associating NPU devices with PCI
devices of GPUs both rely on comparing PBCQ handles. This will fail if a
system has multiple sets of GPUs behind a single PHB.
Rework this to instead use slot locations. The following changes are
introduced:
- Groups of NPU links that connect to the same GPU are presented in the
slot table entries as st_npu_slot, using ST_LOC_NPU_GROUP
- NPU links are created with the ibm,npu-group-id property replacing the
ibm,pbcq property, which is used in BDFN allocation and GPU association
- Slot comparison is handled slightly differently for NPU devices as the
function of the BDFN is ignored, since the device number represents the
physical GPU the link is connected to
- BDFN allocation for NPU devices is now derived from the groups in the
slot table. For Garrison, the same BDFNs are generated as before.
- Association with GPU PCI devices is performed by comparing the slot
label. This means that on future machines with NPUs, slot labels are
compulsory for NVLink functionality to work (see the sketch below).
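For example, a group of NPU links wired to a single GPU slot might be
described like this (entries are illustrative; a real machine's table
will differ):

    static const struct slot_table_entry garrison_npu0_slots[] = {
        {
            .etype    = st_npu_slot,
            .location = ST_LOC_NPU_GROUP(0),
            .name     = "GPU1",
        },
        {
            .etype    = st_npu_slot,
            .location = ST_LOC_NPU_GROUP(1),
            .name     = "GPU2",
        },
        { .etype = st_end },
    };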
Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
The Garrison workbook numbers GPUs starting from GPU1 instead of
GPU0. Update the skiboot location codes to match.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
Add the slot location names for the PCI and NPU slots.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Claimed-to-be-Tested-By: Abhijit Saikia <Abhijit.Saikia@in.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
The PHB slot location code uses the ibm,phb-index property to find slot
location names. As the NPU is implemented as a different PHB type, its
phb-index overlaps with those of the other PHBs in the system.
This patch changes the NPU's use of phb-index to npu-index, which
allows phb-index to be assigned a unique value that can then be matched
by the PHB slot location code.
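The device-tree side of this is small; a sketch (node and counter names
assumed) where the NPU keeps its own index namespace so the real PHBs
can keep phb-index unique:

    /* NPUs get their own index namespace... */
    dt_add_property_cells(npu_dn, "ibm,npu-index", npu_index++);
    /* ...so phb-index stays unique across the real PHBs. */
    dt_add_property_cells(phb_dn, "ibm,phb-index", phb_index++);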
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
Garrison is the first system to support NVLink. Eventually Hostboot
should provide these device tree bindings; in the meantime this patch
adds the required fixups to enable the NVLinks on Garrison.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
Presently the abort() function does not work on BMC-based machines: the
system hangs after an abort/assert call and has to be rebooted from the
BMC (via an IPMI command or the BMC console).
This patch introduces attention functionality for BMC-based machines.
It logs an eSEL event that contains the OPAL version, file info and a
backtrace, and then calls cec_reboot(), which takes care of rebooting
the host.
Note:
- This patch uses ipmi_queue_msg() instead of ipmi_queue_msg_sync(), as
we are having some issues with the sync path. This will be resolved
once we sort out [1].
- This patch calls cec_reboot() to reboot the machine after logging the
eSEL event. It queues an IPMI message, and bt_poll() must keep running
until the reboot IPMI message has been passed to the BMC; hence the
while loop with time_wait_ms() (see the sketch below). Alternatively we
could use xscom_trigger_xstop(), but that stops the system immediately
and eSEL logging fails.
[1] https://lists.ozlabs.org/pipermail/skiboot/2015-August/001824.html
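The tail of the attention path therefore looks roughly like this
(simplified; the real call chain runs through ipmi_terminate(), as the
backtrace below shows):

    /* Queue the eSEL (async; see the note above on ipmi_queue_msg_sync()) */
    ipmi_queue_msg(esel_msg);

    /* Ask the BMC to reboot us... */
    cec_reboot();

    /* ...and keep the timers (and thus bt_poll()) running until the
     * reboot message actually reaches the BMC; abort() never returns. */
    for (;;)
        time_wait_ms(10);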
Sample eSEL output after assert call:
------------------------------------
[hegdevasant@hegdevasant bin]$ strings fir01bmc.150820.120511.eSel.binary
BB821410
AT8335-GTA000000000000
AT8335-GTA000000000000UD
ATDESC
OPAL version : skiboot-5.1.1-44-geae3999-hegdevasant-dirty-bb31bfd
File info : core/init.c:463:0
CPU 0060 Backtrace:
S: 0000000031d83bc0 R: 000000003006086c .ipmi_terminate+0x110
S: 0000000031d83c60 R: 0000000030017f90 ._abort+0x80
S: 0000000031d83ce0 R: 0000000030017fd8 .assert_fail+0x34
S: 0000000031d83d60 R: 0000000030013dcc .load_and_boot_kernel+0x784
S: 0000000031d83e30 R: 000000003001437c .main_cpu_entry+0x57c
S: 0000000031d83f00 R: 0000000030002544 boot_entry+0x194
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
Signed-off-by: Dan Horák <dan@danny.cz>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
---
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>