According to USB Audio Device 2.0 Spec, Ch4.10.1.1:
wMaxPacketSize is defined as follows:
Maximum packet size this endpoint is capable of sending or receiving
when this configuration is selected.
This is determined by the audio bandwidth constraints of the endpoint.
In the current code, wMaxPacketSize is set to the maximum packet size of the
ISO endpoint, which makes the host reserve much more bandwidth in a frame than
is actually needed, so additional endpoints cannot fit into the same frame.
We found this issue while trying to run four f_uac2 gadgets together [1]
over a full-speed connection.
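A minimal sketch of the idea, with illustrative names (not the actual f_uac2 code): derive the full-speed ISO packet size from the stream parameters instead of always advertising the endpoint maximum.

/* Illustrative only: one frame per millisecond at full speed. */
static unsigned int uac2_fs_max_packet(unsigned int rate_hz,
                                       unsigned int channels,
                                       unsigned int sample_bytes)
{
        unsigned int samples_per_frame = DIV_ROUND_UP(rate_hz, 1000);

        return samples_per_frame * channels * sample_bytes;
}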
DWC3 uses bounce buffer to handle non max packet aligned OUT transfers and
the size of bounce buffer is 512 bytes. However if the host initiates OUT
transfers of size more than 512 bytes (and non max packet aligned), the
driver throws a WARN dump but still programs the TRB to receive more than
512 bytes. This will cause the bounce buffer to overflow and corrupt the
adjacent memory locations, which can be fatal.
Fix it by programming the TRB to receive a maximum of DWC3_EP0_BOUNCE_SIZE
(512) bytes.
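A minimal sketch of the clamp, assuming a transfer_size variable that is later programmed into the TRB (illustrative, not the exact dwc3 ep0 code):

/* Never program the TRB for more than the bounce buffer can hold. */
transfer_size = min_t(u32, req->request.length, DWC3_EP0_BOUNCE_SIZE);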
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com> Signed-off-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Users have occasionally reported that file type for some directory
entries is wrong. This mostly happened after updating some libraries.
After some debugging the problem was traced down to
xfs_dir2_node_replace(). The function uses args->filetype as the file type
to store in the replaced directory entry; however, it also calls
xfs_da3_node_lookup_int(), which stores the file type of the current
directory entry in args->filetype. Thus we fail to change the file type of the
directory entry to the proper type.
Fix the problem by storing new file type in a local variable before
calling xfs_da3_node_lookup_int().
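A minimal sketch of the approach inside xfs_dir2_node_replace() (illustrative flow, not the verbatim patch):

/* Latch the caller's file type before the lookup clobbers args->filetype. */
int ftype = args->filetype;

error = xfs_da3_node_lookup_int(state, &rval);
/* ... locate the directory data entry as before ... */
/* store 'ftype' in the entry instead of the now-overwritten args->filetype */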
Reported-by: Giacomo Comes <comes@naic.edu> Signed-off-by: Jan Kara <jack@suse.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
struct xfs_attr_leafblock contains an 'entries' array which is declared
with size 1 although it can in fact contain many more entries. Since this
array is followed by further struct members, gcc (at least in version
4.8.3) thinks that the array has the fixed size of 1 element and thus
may optimize away all accesses beyond the end of array resulting in
non-working code. This problem was only observed with userspace code in
xfsprogs, however it's better to be safe in kernel as well and have
matching kernel and xfsprogs definitions.
Signed-off-by: Jan Kara <jack@suse.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In the dir3 data block readahead function, use the regular read
verifier to check the block's CRC and spot-check the block contents
instead of directly calling only the spot-checking routine. This
prevents corrupted directory data blocks from being read into the
kernel, which can lead to garbage ls output and directory loops (if
say one of the entries contains slashes and other junk).
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1. The 9th bit of buf was believed to be the LSB of divisor's
exponent, but the hardware interprets it as MSB (9th bit) of the
mantissa. The exponent is actually one bit shorter and applies
to base 4, not 2 as previously believed.
2. Loop iterations doubled the exponent instead of incrementing.
3. The exponent wasn't checked for overflow.
4. The function returned requested rate instead of actual rate.
Due to issue #2, the old code deviated from the wrong formula
described in #1 and actually yielded correct rates when divisor
was lower than 4096 by using exponents of 0, 2 or 4 base-2,
interpreted as 0, 1, 2 base-4 with the 9th mantissa bit clear.
However, at 93.75 kbaud or less the rate turned out too slow
due to #2 or too fast due to #2 and #3.
I tested this patch by sending and validating 0x00,0x01,..,0xff
to an FTDI dongle at 234, 987, 2401, 9601, 31415, 115199, 250k,
500k, 750k, 1M, 1.5M, 3M+1 baud. All rates passed.
I also used pv to check speed at some rates unsupported by FTDI:
45 (the lowest possible), 2M, 4M, 5M and 6M-1. Looked sane.
Signed-off-by: Michal Pecio <michal.pecio@gmail.com> Fixes: 399aa9a75ad3 ("USB: pl2303: use divisors for unsupported baud rates")
[johan: update summary ] Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The driver used usb_get_serial_data(port->serial) which compiled but resulted
in a NULL pointer being returned (and subsequently used). I did not go deeper
into this but I guess this is a regression.
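A minimal sketch of the difference, assuming the private data is allocated per port in port_probe() (struct name illustrative):

/* Per-port private data must be fetched with the per-port accessor. */
struct symbol_private *priv = usb_get_serial_port_data(port);

/* The old code used usb_get_serial_data(port->serial), which returns the
 * serial-level pointer that was never set here, i.e. NULL. */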
Signed-off-by: Philipp Hachtmann <hachti@hachti.de> Fixes: a85796ee5149 ("USB: symbolserial: move private-data allocation to port_probe") Acked-by: Johan Hovold <johan@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The commit dd11444327ce ("spi: dw-spi: Convert 16bit accesses to 32bit
accesses") changed all 16bit accesses in the DW_apb_ssi driver to 32bit.
This, unfortunately, breaks data register access on picoXcell, where the
DW IP needs data register accesses to be word accesses (all other
accesses appear to be OK).
This change introduces a new master variable to allow interface drivers
to specify that 16bit data transfer I/O is required. This change also
introduces the ability to set this variable via device tree bindings in
the MMIO interface driver. Both the core and the MMIO interface driver
default to the current 32bit behaviour.
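A minimal sketch of how such a per-master width setting might be honoured for data register accesses (field and register names illustrative, not the exact spi-dw code):

static u32 dw_read_data_reg(struct dw_spi *dws)
{
        /* Pick the data-register access width from the per-master setting. */
        if (dws->reg_io_width == 2)
                return readw(dws->regs + DW_SPI_DR);

        return readl(dws->regs + DW_SPI_DR);    /* default: 32-bit access */
}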
Before this change, on a picoXcell pc3x3:
spi_master spi32766: interrupt_transfer: fifo overrun/underrun
m25p80 spi32766.0: error -5 reading 9f
m25p80: probe of spi32766.0 failed with error -5
After this change:
m25p80 spi32766.0: m25p40 (512 Kbytes)
Fixes: dd11444327ce ("spi: dw-spi: Convert 16bit accesses to 32bit accesses") Signed-off-by: Michael van der Westhuizen <michael@smart-africa.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
spfi_setup may be called many times by the SPI framework, but
gpio_request_one can only be called once without freeing; calling
gpio_request_one repeatedly returns an error, which in turn causes the
request in spi_setup to be marked as failed.
We can have a per-spi_device flag that indicates whether or not the
gpio has been requested. If the gpio has already been requested use
gpio_direction_output to set the direction of the gpio.
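A minimal sketch of the per-spi_device flag (names illustrative; allocation and error handling elided):

struct img_spfi_device_data {
        bool gpio_requested;
};

static int img_spfi_setup(struct spi_device *spi)
{
        struct img_spfi_device_data *data = spi_get_ctldata(spi);
        int ret = 0;

        if (!data->gpio_requested) {
                ret = gpio_request_one(spi->cs_gpio,
                                       (spi->mode & SPI_CS_HIGH) ?
                                       GPIOF_OUT_INIT_LOW : GPIOF_OUT_INIT_HIGH,
                                       dev_name(&spi->dev));
                if (!ret)
                        data->gpio_requested = true;
        } else {
                /* Already requested: just (re)set the idle level. */
                gpio_direction_output(spi->cs_gpio, !(spi->mode & SPI_CS_HIGH));
        }

        return ret;
}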
Fixes: 8c2c8c03cdcb ("spi: img-spfi: Control CS lines with GPIO") Signed-off-by: Sifan Naeem <sifan.naeem@imgtec.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Calling spfi_wait_all_done is not required if the transfer has timed
out before all data is transferred.
spfi_wait_all_done polls for Alldone interrupt which is triggered to
mark the transfer as complete and to indicate it is now safe to issue
a new transfer.
Fixes: 8c2c8c0 ("spi: img-spfi: Control CS lines with GPIO") Signed-off-by: Sifan Naeem <sifan.naeem@imgtec.com> Reviewed-by: Andrew Bresticker <abrestic@chromium.org> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch fixes a regression introduced by commit 232a5adc5199 ("spi:
bitbang: only toggle bitchanges"). The attempt to optimize writes of
consecutive bit patterns broke most of the combinations of word size
and SPI modes due to selecting the wrong bit as the MSB value.
Fixes: 232a5adc5199 (spi: bitbang: only toggle bitchanges) Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When using reverse polarity for the clock (spi-cpol) on a device,
the clock line gets altered after chip-select has been asserted,
resulting in an additional clock beat, which confuses the hardware.
This did not show when using native-CS, as the same register
is used to control cs as well as polarity, so the changes came
into effect at the same time. Unfortunately this is not true
with gpio-cs.
To avoid this situation this patch moves the setup of polarity
(spi-cpol and spi-cpha) outside of the chip-select into
prepare_message, which is run prior to asserting chip-select.
This also fixes resetting 3-wire mode after use of RX mode, so that
a 3-wire sequence TX, RX, TX works as well (previously it ran
TX, RX, RX instead).
Reported-by: Noralf Tronnes <noralf@tronnes.org> Signed-off-by: Martin Sperl <kernel@martin.sperl.org> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
On multi-function JMicron SATA/PATA/AHCI devices, the PATA controller at
function 1 doesn't work if it is powered on before the SATA controller at
function 0. The result is that PATA doesn't work after resume, and we
print messages like this:
pata_jmicron 0000:02:00.1: Refused to change power state, currently in D3
irq 17: nobody cared (try booting with the "irqpoll" option)
Async resume was introduced in v3.15 by 76569faa62c4 ("PM / sleep:
Asynchronous threads for resume_noirq"). Prior to that, we powered on
the functions in order, so this problem shouldn't happen.
e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361")
solved the problem for JMicron 361 and 363 devices. With async suspend
disabled, we always power on function 0 before function 1.
Barto then reported the same problem with a JMicron 368 (see comment #57 in
the bugzilla).
Rather than extending the blacklist piecemeal, disable async suspend for
all JMicron multi-function SATA/PATA/AHCI devices.
This quirk could stay in the ahci and pata_jmicron drivers, but it's likely
the problem will occur even if pata_jmicron isn't loaded until after the
suspend/resume. Making it a PCI quirk ensures that we'll preserve the
power-on order even if the drivers aren't loaded.
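A minimal sketch of such a quirk (class match illustrative; the real quirk may match the affected devices differently):

/* Function 1 (PATA) must not power up before function 0 (SATA/AHCI),
 * so fall back to synchronous suspend/resume ordering. */
static void quirk_jmicron_async_suspend(struct pci_dev *dev)
{
        if (dev->multifunction)
                device_disable_async_suspend(&dev->dev);
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_JMICRON, PCI_ANY_ID,
                              PCI_CLASS_STORAGE_SATA_AHCI, 0,
                              quirk_jmicron_async_suspend);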
Set the PCI_DEV_FLAGS_VPD_REF_F0 flag on all Intel Ethernet device
functions other than function 0, so that on multi-function devices, we will
always read VPD from function 0 instead of from the other functions.
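A minimal sketch of the quirk (illustrative; the real quirk also sanity-checks that function 0 is a matching device):

static void quirk_vpd_ref_f0(struct pci_dev *dev)
{
        /* Only redirect the non-zero functions; function 0 keeps real access. */
        if (PCI_FUNC(dev->devfn))
                dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, quirk_vpd_ref_f0);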
[bhelgaas: changelog] Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add a dev_flags bit, PCI_DEV_FLAGS_VPD_REF_F0, to access VPD through
function 0 to provide VPD access on other functions. This is for hardware
devices that provide copies of the same VPD capability registers in
multiple functions. Because the kernel expects that each function has its
own registers, both the locking and the state tracking are affected by VPD
accesses to different functions.
On such devices for example, if a VPD write is performed on function 0,
*any* later attempt to read VPD from any other function of that device will
hang. This has to do with how the kernel tracks the expected value of the
F bit per function.
Concurrent accesses to different functions of the same device can not only
hang but also corrupt both read and write VPD data.
When hangs occur, typically the error message:
vpd r/w failed. This is likely a firmware bug on this device.
will be seen.
Never set this bit on function 0 or there will be an infinite recursion.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In fixup_ti816x_class(), we assigned "class = PCI_CLASS_MULTIMEDIA_VIDEO".
But PCI_CLASS_MULTIMEDIA_VIDEO is only the two-byte base class/sub-class
and needs to be shifted to make space for the low-order interface byte.
Shift PCI_CLASS_MULTIMEDIA_VIDEO to set the correct class code.
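The corrected assignment amounts to a one-line change (sketch):

/* Base/sub-class live in the upper bytes; the lowest byte is the
 * programming interface, so the 16-bit constant must be shifted up. */
dev->class = PCI_CLASS_MULTIMEDIA_VIDEO << 8;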
Fixes: 63c4408074cb ("PCI: Add quirk for setting valid class for TI816X Endpoint") Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> CC: Hemant Pedanekar <hemantp@ti.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The other ce clocks have the flag set, but ce1 doesn't, so
clk_set_rate() doesn't propagate up the tree to the ce1_src_clk.
Set the flag as this is supported.
Reported-by: Bjorn Andersson <bjorn.andersson@sonymobile.com> Tested-by: Bjorn Andersson <bjorn.andersson@sonymobile.com> Fixes: 02824653200b ("clk: qcom: Add APQ8084 Global Clock Controller support") Fixes: d33faa9ead8d ("clk: qcom: Add support for MSM8974's global clock controller (GCC)") Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The current critical clock list for Pistachio enables
only the mips and sys clocks by default, but there are
also other clocks that are not claimed by anyone and
need to be enabled by default.
This patch updates the critical clocks that need
to be enabled by default.
Add a separate struct to distinguish the critical clocks
as listed:
1.) core clocks:
a.) mips clock
2.) peripheral system clocks:
a.) sys clock
b.) sys_bus clock
c.) DDR clock
d.) ROM clock
Fixes: b35d7c33419c("CLK: Pistachio: Register core clocks") Reviewed-by: Andrew Bresticker <abrestic@chromium.org> Signed-off-by: Ezequiel Garcia <ezequiel.garcia@imgtec.com> Signed-off-by: Damien.Horsley <Damien.Horsley@imgtec.com> Signed-off-by: Govindraj Raja <govindraj.raja@imgtec.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
PLL enable callbacks are overriding PLL mode (int/frac) and
Noise reduction (on/off) settings set by the boot loader which
results in the incorrect clock rate.
PLL mode and noise reduction are defined by the DSMPD and DACPD bits
of the PLL control register. PLL .enable() callbacks enable PLL
by deasserting all power-down bits of the PLL control register,
including DSMPD and DACPD bits, which is not necessary since
these bits don't actually enable/disable PLL.
This commit fixes the problem by removing DSMPD and DACPD bits
from the "PLL enable" mask.
Fixes: 43049b0c83f17("CLK: Pistachio: Add PLL driver") Reviewed-by: Andrew Bresitcker <abrestic@chromium.org> Signed-off-by: Zdenko Pulitika <zdenko.pulitika@imgtec.com> Signed-off-by: Govindraj Raja <govindraj.raja@imgtec.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit d5e136a21b2028fb1f45143ea7112d5869bfc6c7 ("clk: samsung: Register
clk provider only after registering its all clocks", merged in v3.17-rc1)
modified the way the driver registers its clocks with the core framework.
This change was not applied to the s5pv210 clock driver, which was merged
in parallel with that commit. This patch adds the missing call to
samsung_clk_of_add_provider(), so the driver is operational again.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Tomasz Figa <tomasz.figa@gmail.com> Signed-off-by: Michael Turquette <mturquette@baylibre.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The TSADC gate clock was used in Exynos4x12 DTSI for exynos-adc driver.
However TSADC is present only on Exynos4210 so on Trats2 board (with
Exynos4412 SoC) the exynos-adc driver could not be probed:
ERROR: could not get clock /adc@126C0000:adc(0)
exynos-adc 126c0000.adc: failed getting clock, err = -2
exynos-adc: probe of 126c0000.adc failed with error -2
Instead, on Exynos4x12 SoCs the main clock used by the Analog to Digital
Converter is located in a different register and is named PCLK_ADC in the
datasheet. Regardless of the name, the purpose of this PCLK_ADC clock
is the same as that of TSADC on Exynos4210.
The patch adds a gate clock for Exynos4x12 using the proper register so
backward compatibility is preserved. This fixes the probe of the exynos-adc
driver on Exynos4x12 boards and allows accessing sensors connected to it
on the Trats2 board (ntc,ncp15wb473 AP and battery thermistors).
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Fixes: c63c57433003 ("ARM: dts: Add ADC's dt data to read raw data for exynos4x12") Reviewed-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk> Acked-by: Tomasz Figa <tomasz.figa@gmail.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The dwmac ethernet controller on the rk3288 supports phys connected
via rgmii and rmii. With rgmii phys it is expected that the mac clock
is provided externally, while with rmii phys the clock can be external
but can also be generated from the plls. In the latter case it of course
needs to be at 50MHz, which gets set from the dwmac_rk driver.
As most devices use an rgmii phy, it never surfaced so far that the mac
clk mux does not go up one level to the pll clock in the rmii case with
internal clock generation, as it is missing the CLK_SET_RATE_PARENT flag,
and thus will not set the correct frequency in most cases.
Static analysis by cppcheck found an issue that was recently introduced by
commit 471f7707b6f0b1 ("PM / clock_ops: make __pm_clk_enable more generic"),
where the return status in ret was not being initialised and garbage
was returned when ce->status >= PCE_STATUS_ERROR.
The fact that ret is not checked by the caller, and that ret is only used
internally in __pm_clk_enable() to check whether clk_enable() was OK,
means we can stop returning it and instead turn __pm_clk_enable() into a
function with a void return.
Fixes: 471f7707b6f0b1 ("PM / clock_ops: make __pm_clk_enable more generic") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
`devpriv->ao_timer` is used while an asynchronous command is running on
the AO subdevice. It also gets modified by the subdevice's `cmdtest`
handler for checking new asynchronous commands,
`usbduxsigma_ao_cmdtest()`, which is not correct as it's allowed to
check new commands while an old command is still running. Fix it by
moving the code which sets up `devpriv->ao_timer` into the subdevice's
`cmd` handler, `usbduxsigma_ao_cmd()`.
Note that the removed code in `usbduxsigma_ao_cmdtest()` checked that
`devpriv->ao_timer` did not end up less than 1, but that could not
happen because `cmd->scan_begin_arg` or `cmd->convert_arg` had
already been range-checked.
Also note that we tested the `high_speed` variable in the old code, but
that is currently always 0 and means that we always use "scan" timing
(`cmd->scan_begin_src == TRIG_TIMER` and `cmd->convert_src == TRIG_NOW`)
and never "convert" (individual sample) timing (`cmd->scan_begin_src ==
TRIG_FOLLOW` and `cmd->convert_src == TRIG_TIMER`). The moved code
tests `cmd->convert_src` instead to decide whether "scan" or "convert"
timing is being used, although currently only "scan" timing is
supported.
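A minimal sketch of the moved set-up in the `cmd` handler, assuming the driver's 1 ms frame timing base (illustrative, not the verbatim patch):

/* Set up the timer only when the command is actually started. */
if (cmd->convert_src == TRIG_TIMER)     /* per-sample ("convert") timing */
        devpriv->ao_timer = cmd->convert_arg / 1000000;
else                                    /* per-scan timing */
        devpriv->ao_timer = cmd->scan_begin_arg / 1000000;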
Fixes: fb1ef622e7a3 ("staging: comedi: usbduxsigma: tidy up analog output command support") Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Reviewed-by: Bernd Porr <mail@berndporr.me.uk> Reviewed-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
`devpriv->ai_timer` is used while an asynchronous command is running on
the AI subdevice. It also gets modified by the subdevice's `cmdtest`
handler for checking new asynchronous commands
(`usbduxsigma_ai_cmdtest()`), which is not correct as it's allowed to
check new commands while an old command is still running. Fix it by
moving the code which sets up `devpriv->ai_timer` and
`devpriv->ai_interval` into the subdevice's `cmd` handler,
`usbduxsigma_ai_cmd()`.
Note that the removed code in `usbduxsigma_ai_cmdtest()` checked that
`devpriv->ai_timer` did not end up less than 1, but that could not
happen because `cmd->scan_begin_arg` had already been checked to be at
least the minimum required value (at least when `cmd->scan_begin_src ==
TRIG_TIMER`, which had also been checked to be the case).
Fixes: b986be8527c7 ("staging: comedi: usbduxsigma: tidy up analog input command support") Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Reviewed-by: Bernd Porr <mail@berndporr.me.uk> Reviewed-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The "adl_pci7x3x" driver replaced the "adl_pci7230" and "adl_pci7432"
drivers in commits 8f567c373c4b ("staging: comedi: new adl_pci7x3x
driver") and 657f77d173d3 ("staging: comedi: remove adl_pci7230 and
adl_pci7432 drivers"). Although the new driver code agrees with the
user manuals for the respective boards, digital outputs stopped working
on the PCI-7230. This has 16 digital output channels and the previous
adl_pci7230 driver shifted the 16 bit output state left by 16 bits
before writing to the hardware register. The new adl_pci7x3x driver
doesn't do that. Fix it in `adl_pci7x3x_do_insn_bits()` by checking
for the special case of the subdevice having only 16 channels and
duplicating the 16 bit output state into both halves of the 32-bit
register. That should work both for what the board actually does and
for what the user manual says it should do.
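A minimal sketch of the special case in the insn_bits handler (register handling illustrative):

if (comedi_dio_update_state(s, data)) {
        unsigned int val = s->state;

        /* PCI-7230: mirror the 16 output bits into both register halves. */
        if (s->n_chan == 16)
                val |= val << 16;

        outl(val, dev->iobase + reg);   /* 'reg' as computed by the driver */
}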
Fixes: 8f567c373c4b ("staging: comedi: new adl_pci7x3x driver") Signed-off-by: Ian Abbott <abbotti@mev.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
During the various CPU_ONLINE callbacks CPUn is online but not
active. Several things can go wrong at that point, depending on
the scheduling of tasks on CPU0.
The ->wake_cpu of the unparked thread is not allowed, making a call
to select_fallback_rq() necessary. Then, select_fallback_rq() cannot
find an allowed, active CPU and promptly resets the allowed CPUs, so
that the task in question ends up on CPU0.
When those unparked tasks are eventually executed, they run
immediately into a BUG:
kernel BUG at kernel/smpboot.c:135!
Just changing the order in which the online/active bits are set
(and adding some memory barriers), would solve the two issues
above. However, it would change the order of operations back to
the one before commit 6acbfb96976f ("sched: Fix hotplug vs.
set_cpus_allowed_ptr()"), thus, reintroducing that particular
problem.
Going further back into history, we have at least the following
commits touching this topic:
- commit 2baab4e90495 ("sched: Fix select_fallback_rq() vs cpu_active/cpu_online")
- commit 5fbd036b552f ("sched: Cleanup cpu_active madness")
Together, these give us the following non-working solutions:
- secondary CPU sets active before online, because active is assumed to
be a subset of online;
- secondary CPU sets online before active, because the primary CPU
assumes that an online CPU is also active;
- secondary CPU sets online and waits for primary CPU to set active,
because it might deadlock.
Commit 875ebe940d77 ("powerpc/smp: Wait until secondaries are
active & online") introduces an arch-specific solution to this
arch-independent problem.
Now, go for a more general solution without explicit waiting and
simply set active twice: once on the secondary CPU after online
was set and once on the primary CPU after online was seen.
Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Anton Blanchard <anton@samba.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Joerg Roedel <jroedel@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Wilson <msw@amazon.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 6acbfb96976f ("sched: Fix hotplug vs. set_cpus_allowed_ptr()") Link: http://lkml.kernel.org/r/1439408156-18840-1-git-send-email-jschoenh@amazon.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add inverse unit conversion macro to convert from standard IIO units to
units that might be used by some devices.
Those are useful in combination with scale factors that are specified as
IIO_VAL_FRACTIONAL. Typically the denominator for those specifications will
contain the maximum raw value the sensor will generate and the numerator
the value it maps to in a specific unit. Sometimes datasheets specify those
in different units than the standard IIO units (e.g. degree/s instead of
rad/s) and so we need to do a unit conversion.
From a mathematical point of view it does not make a difference whether we
apply the unit conversion to the numerator or the inverse unit conversion
to the denominator since (x / y) / z = x / (y * z). But as the denominator
is typically a larger value and we are rounding both the numerator and
denominator to integer values using the later method gives us a better
precision (E.g. the relative error is smaller if we round 8000.3 to 8000
rather than rounding 8.3 to 8).
This is where in inverse unit conversion macros will be used.
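A sketch of what such an inverse macro looks like, using an illustrative name and constant (the real macros also round to nearest):

/* Multiply by 57.295780 (degrees per radian) using only integer
 * arithmetic, rounding to nearest. */
#define EXAMPLE_RAD_TO_DEGREE(rad) \
        (((rad) * 57295780ULL + 500000ULL) / 1000000ULL)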
Negative return values are not supported by iio_event_poll since
its return type is unsigned int.
Fixes: f18e7a068a0a3 ("iio: Return -ENODEV for file operations if the device has been unregistered") Signed-off-by: Cristina Opriceana <cristina.opriceana@gmail.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The novx parameter disables the vector facility, but the HWCAP_S390_VXRS
bit in the ELF hardware capabilities is always set if the machine has
the vector facility. If a user space program uses the "vx" string
in the features field of /proc/cpuinfo to decide whether to use vector
instructions, it will crash if the novx kernel parameter is set.
Convert setup_hwcaps to an arch_initcall and use MACHINE_HAS_VX to
decide if the HWCAP_S390_VXRS bit needs to be set.
Fix this error when compiling with CONFIG_SMP=n and
CONFIG_DYNAMIC_DEBUG=y:
drivers/s390/char/sclp_early.c: In function 'sclp_read_info_early':
drivers/s390/char/sclp_early.c:87:19: error: 'EBUSY' undeclared (first use in this function)
} while (rc == -EBUSY);
^
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In I915_READ64_2x32 we attempt to read a 64bit register using 2 32bit
reads. Due to the nature of the registers we try to read in this manner,
they may increment between the two instructions (e.g. a timestamp
counter). To keep the result accurate, we repeat the read if we detect
an overflow (i.e. the upper value varies). However, some hardware is just
plain flaky and may loop endlessly as the upper 32bits are never stable.
Just give up after a couple of tries and report whatever we read last.
v2: Use the most recent values when erring out on an unstable register.
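A minimal sketch of the bounded retry (illustrative helper, not the actual I915_READ64_2x32 macro):

static u64 read64_2x32_bounded(void __iomem *lower, void __iomem *upper)
{
        u32 upper1, upper2, lower32;
        int tries = 2;

        upper1 = readl(upper);
        do {
                upper2 = upper1;
                lower32 = readl(lower);
                upper1 = readl(upper);
        } while (upper1 != upper2 && --tries);

        /* Give up after a couple of tries; report the most recent values. */
        return ((u64)upper1 << 32) | lower32;
}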
Reported-by: russianneuromancer@ya.ru
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=91906 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Jani Nikula <jani.nikula@linux.intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There have been many hard to track down bugs whereby userspace forgot to
flag a write buffer and then cause graphics corruption or a hung GPU
when that buffer was later purged under memory pressure (as the buffer
appeared clean, its pages would have been evicted rather than preserved
and any changes more recent than in the backing storage would be lost).
In retrospect this is a rare optimisation against memory pressure,
already the slow path. If we always mark the buffer as dirty when
accessed by the GPU, anything not used can still be evicted cheaply
(ideal behaviour for mark-and-sweep eviction) but we do not run the risk
of corruption. For correct read serialisation, userspace still has to
notify when the GPU writes to an object. However, there are certain
situations under which userspace may wish to tell white lies to the
kernel...
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Kristian Høgsberg <krh@bitplanet.net> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: "Goel, Akash" <akash.goel@intel.co> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Just like single link MIPI panels, similarly for dual link panels, pipe
to be configured is based on the DVO port from VBT Block 2. In hardware,
Port A is mapped with Pipe A and Port C is mapped with Pipe B.
Due to some recent changes in
drm_helper_probe_single_connector_modes_merge_bits(), old custom modes
were not being pruned properly. In current kernels,
drm_mode_validate_basic() is called to sanity-check each mode in the
list. If the sanity-check passes, the mode's status gets set to
MODE_OK. In older kernels this check was not done, so old custom modes
would still have a status of MODE_UNVERIFIED at this point, and would
therefore be pruned later in the function.
As a result of this new behavior, the list of modes for a device always
includes every custom mode ever configured for the device, with the
largest one listed first. Since desktop environments usually choose the
first preferred mode when a hotplug event is emitted, this had the
result of making it very difficult for the user to reduce the size of
the display.
The qxl driver did implement the mode_valid connector function, but it
was empty. In order to restore the old behavior where old custom modes
are pruned, we implement a proper mode_valid function for the qxl
driver. This function now checks each mode against the last configured
custom mode and the list of standard modes. If the mode doesn't match
any of these, its status is set to MODE_BAD so that it will be pruned as
expected.
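A minimal sketch of such a helper (the match helpers are hypothetical; the real driver compares against its stored custom mode and the standard-mode list):

static enum drm_mode_status qxl_conn_mode_valid(struct drm_connector *connector,
                                                struct drm_display_mode *mode)
{
        /* qxl_mode_is_custom()/qxl_mode_is_standard() are hypothetical helpers. */
        if (qxl_mode_is_custom(connector, mode) ||
            qxl_mode_is_standard(connector, mode))
                return MODE_OK;

        return MODE_BAD;        /* pruned by the probe helper */
}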
Commit 92122789b2d6 ("drm/i915: preserve SSC if previously set v3")
added code to intel_modeset_gem_init to override the SSC status read
from VBT with the SSC status set by BIOS.
However, intel_modeset_gem_init is invoked *after* intel_modeset_init,
which calls intel_setup_outputs, which *modifies* SSC status by way of
intel_init_pch_refclk. So unlike advertised, intel_modeset_gem_init
doesn't preserve the SSC status set by BIOS but whatever
intel_init_pch_refclk decided on.
This is a problem on dual gpu laptops such as the MacBook Pro which
require either a handler to switch DDC lines, or the discrete gpu
to proxy DDC/AUX communication: Both the handler and the discrete
gpu may initialize after the i915 driver, and consequently, an LVDS
connector may initially seem disconnected and the SSC therefore
is disabled by intel_init_pch_refclk, but on reprobe the connector
may turn out to be connected and the SSC must then be enabled.
Due to 92122789b2d6 however, the SSC is not enabled on reprobe since
it is assumed BIOS disabled it while in fact it was disabled by
intel_init_pch_refclk.
Also, because the SSC status is preserved so late, the preserved value
only ever gets used on resume but not on panel initialization:
intel_modeset_init calls intel_init_display which indirectly calls
intel_panel_use_ssc via multiple subroutines, *before* the BIOS value
overrides the VBT value in intel_modeset_gem_init (intel_panel_use_ssc
is the sole user of dev_priv->vbt.lvds_use_ssc).
Fix this by moving the code introduced by 92122789b2d6 from
intel_modeset_gem_init to intel_modeset_init before the invocation
of intel_setup_outputs and intel_init_display.
Add a DRM_DEBUG_KMS as suggested way back by Jani:
http://lists.freedesktop.org/archives/intel-gfx/2014-June/046666.html
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=88861
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=61115 Tested-by: Paul Hordiienko <pvt.gord@gmail.com>
[MBP 6,2 2010 intel ILK + nvidia GT216 pre-retina] Tested-by: William Brown <william@blackhats.net.au>
[MBP 8,2 2011 intel SNB + amd turks pre-retina] Tested-by: Lukas Wunner <lukas@wunner.de>
[MBP 9,1 2012 intel IVB + nvidia GK107 pre-retina] Tested-by: Bruno Bierbaumer <bruno@bierbaumer.net>
[MBP 11,3 2013 intel HSW + nvidia GK107 retina -- work in progress] Fixes: 92122789b2d6 ("drm/i915: preserve SSC if previously set v3") Signed-off-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
AUX addresses are 20 bits long. Send out the entire address instead of
just the low 16 bits.
Port of:
drm/radeon/atom: Send out the full AUX address
to radeon non-atom aux path
Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We are no longer checking the DP link status on long hpd. We used to do
that from the .hot_plug() handler, but it was removed when MST got
introduced.
If there's no userspace we now fail to retrain the link if the sink
power is toggled (or cable yanked and replugged), meaning the user is
left staring at a blank screen. With the retraining put back that should
be fixed.
Also remove the leftover comment that referred to the old retraining
from .hot_plug().
An earlier commit, "drm/i915: gen4: work around hang during hibernation",
worked around the issue using an explicit blacklist for the GENs/BIOS vendors
where the failure was reported. Later we had reports of the same failure on
platforms not on this list.
To the best of my knowledge the correct thing to do is still to put the device
into the PCI D3 state during hibernation, see [1] and [2] for the reasons. This
also aligns with our future plans to unify more of the runtime and system
suspend/resume paths. Since an exact blacklist seems to be impractical
(multiple GENs and BIOS vendors are affected), apply the workaround on
everything pre GEN6.
Most of the time this isn't an issue since hotplugging an adaptor will
trigger a crtc mode change which in turn, causes the driver to probe
every DisplayPort for a dpcd. However, in cases where hotplugging
doesn't cause a mode change (specifically when one unplugs a monitor
from a DisplayPort connector, then plugs that same monitor back in
seconds later on the same port without any other monitors connected), we
never probe for the dpcd before starting the initial link training. What
happens from there looks like this:
- GPU has only one monitor connected. It's connected via
DisplayPort, and does not go through an adaptor of any sort.
- User unplugs DisplayPort connector from GPU.
- Change in HPD is detected by the driver, we probe every
DisplayPort for a possible connection.
- Probe the port the user originally had the monitor connected
on for its dpcd. This fails, and we clear the first (and only
the first) byte of the dpcd to indicate we no longer have a
dpcd for this port.
- User plugs the previously disconnected monitor back into the
same DisplayPort.
- radeon_connector_hotplug() is called before everyone else,
and tries to handle the link training. Since only the first
byte of the dpcd is zeroed, the driver is able to complete
link training but does so against the wrong dpcd, causing it
to initialize the link with the wrong settings.
- Display stays blank (usually), dpcd is probed after the
initial link training, and the driver prints no obvious
messages to the log.
In theory, since only one byte of the dpcd is chopped off (specifically,
the byte that contains the revision information for DisplayPort), it's
not entirely impossible that this bug may not show on certain monitors.
For instance, the only reason this bug was visible on my ASUS PB238
monitor was that this monitor uses the enhanced framing
symbol sequence, the flag for which is ignored if the radeon driver
thinks that the DisplayPort version is below 1.1.
Signed-off-by: Stephen Chandler Paul <cpaul@redhat.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
modify_ldt() has questionable locking and does not synchronize
threads. Improve it: redesign the locking and synchronize all
threads' LDTs using an IPI on all modifications.
This will dramatically slow down modify_ldt in multithreaded
programs, but there shouldn't be any multithreaded programs that
care about modify_ldt's performance in the first place.
This fixes some fallout from the CVE-2015-5157 fixes.
Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: security@kernel.org <security@kernel.org> Cc: xen-devel <xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/4c6978476782160600471bd865b318db34c7b628.1438291540.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Sourcery CodeBench Lite 2014.05 toolchain (gcc 4.8.3, binutils
2.24.51) has a GCC which implements -fuse-ld, and it doesn't include
the gold linker, but it lacks an ld.bfd executable in its
installation. This means that passing -fuse-ld=bfd fails with:
Arguably this is a deficiency in the toolchain, but I suspect it's
commonly used enough that it's worth accommodating: just use
cc-ldoption (to cause a link attempt) instead of cc-option to test
whether we can use -fuse-ld. So -fuse-ld=bfd won't be used with this
toolchain, but the build will rightly succeed, just as it does for
toolchains which don't implement -fuse-ld (and don't use gold as the
default linker).
Note: this will change the failure mode for a corner case I was trying
to handle in d2b30cd4b722, where the toolchain defaults to the gold
linker and the BFD linker is not found in PATH, from:
Commit b253149b843f ("sched/idle/x86: Restore mwait_idle() to fix boot
hangs, to improve power savings and to improve performance") restores
mwait_idle(), but the trace_cpu_idle related calls are missing. This
causes powertop on my old desktop powered by Intel Core2 E6550 to
report zero wakeups and zero events.
In the recent x2apic cleanup I got two things really wrong:
1) The safety check in __disable_x2apic which allows the function to
be called unconditionally is backwards. The check is there to
prevent access to the apic MSR in case the machine has no
apic. Though right now it returns if the machine has an apic and
therefore the disabling of x2apic is never invoked.
2) x2apic_disable() sets x2apic_mode to 0 after registering the local
apic. That's wrong, because register_lapic_address() checks x2apic
mode and therefore takes the wrong code path.
This results in boot failures on machines with x2apic preenabled by
BIOS and can also lead to a fatal MSR access on machines without an
apic.
The solutions are simple:
1) Correct the sanity check for apic availability
2) Clear x2apic_mode _before_ calling register_lapic_address()
Fixes: 659006bf3ae3 'x86/x2apic: Split enable and setup function' Reported-and-tested-by: Javier Monteagudo <javiermon@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://bugzilla.redhat.com/show_bug.cgi?id=1224764 Cc: Laura Abbott <labbott@redhat.com> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Borislav Petkov <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since commit feb44f1f7a4ac299d1ab1c3606860e70b9b89d69 (x86/xen:
Provide a "Xen PV" APIC driver to support >255 VCPUs) Xen guests need
a full APIC driver and thus should depend on X86_LOCAL_APIC.
This fixes an i386 build failure with !SMP && !CONFIG_X86_UP_APIC by
disabling Xen support in this configuration.
Users needing Xen support in a non-SMP i386 kernel will need to enable
CONFIG_X86_UP_APIC.
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit d795ef9aa831 ("arm64: perf: don't warn about missing
interrupt-affinity property for PPIs") added a check for PPIs so that
we avoid parsing the interrupt-affinity property for these naturally
affine interrupts.
Unfortunately, this check can trigger an early (successful) return and
we will not assign the value of cpu_pmu->plat_device. This patch fixes
the issue.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When injecting a fault into a misbehaving 32bit guest, it seems
rather idiotic to also inject a 64bit fault that is only going
to corrupt the guest state. This leads to a situation where we
perform an illegal exception return at EL2 causing the host
to crash instead of killing the guest.
Just fix the stupid bug that has been there from day 1.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We added changes in fnic driver patch 1.6.0.16 to acquire
io_req_lock in fnic_queuecommand() before issuing I/O, so that I/O completion
is serialized. But when releasing the lock we check the I/O flag, and
this could have been modified if an I/O abort occurred before I/O completion.
In this case we won't release the lock, which causes a deadlock in some
scenarios. Using a local variable to check the I/O lock status resolves the problem.
Fixes: 41df7b02db82cf6c14f094757bac3830d10a827f Signed-off-by: Hiral Shah <hishah@cisco.com> Signed-off-by: Sesidhar Baddela <sebaddel@cisco.com> Signed-off-by: Anil Chintalapati <achintal@cisco.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The Crucial M500 is known to have issues with queued TRIM commands, the
factory recertified SSDs use a different model number naming convention
which causes them to get ignored by the blacklist.
The new naming convention boils down to: s/Crucial_/FC/
The CAN FD data bittiming constants are provided via netlink only when there
are valid CAN FD constants available in priv->data_bittiming_const.
Due to the indirection of pointer assignments in the peak_usb driver the
priv->data_bittiming_const never becomes NULL - not even for non-FD adapters.
The data_bittiming_const points to zero'ed data which leads to this result
when running 'ip -details link show can0':
35: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN mode DEFAULT group default qlen 10
link/can promiscuity 0
can state STOPPED restart-ms 0
pcan_usb: tseg1 1..16 tseg2 1..8 sjw 1..4 brp 1..64 brp-inc 1
: dtseg1 0..0 dtseg2 0..0 dsjw 1..0 dbrp 0..0 dbrp-inc 0 <== BROKEN!
clock 8000000
This patch changes struct peak_usb_adapter::bittiming_const and struct
peak_usb_adapter::data_bittiming_const to pointers to fix the assignment
problems.
Reported-by: Oliver Hartkopp <socketcan@hartkopp.net> Tested-by: Oliver Hartkopp <socketcan@hartkopp.net> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The routines in scsi_rpm.c assume that if a runtime-PM callback is
invoked for a SCSI device, it can only mean that the device's driver
has asked the block layer to handle the runtime power management (by
calling blk_pm_runtime_init(), which among other things sets q->dev).
However, this assumption turns out to be wrong for things like the ses
driver. Normally ses devices are not allowed to do runtime PM, but
userspace can override this setting. If this happens, the kernel gets
a NULL pointer dereference when blk_post_runtime_resume() tries to use
the uninitialized q->dev pointer.
This patch fixes the problem by calling the block layer's runtime-PM
routines only if the device's driver really does have a runtime-PM
callback routine. Since ses doesn't define any such callbacks, the
crash won't occur.
This fixes Bugzilla #101371.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Stanisław Pitucha <viraptor@gmail.com> Reported-by: Ilan Cohen <ilanco@gmail.com> Tested-by: Ilan Cohen <ilanco@gmail.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This helper is required for irq chips which do not implement an
irq_set_type callback and need to call down the irq domain hierarchy
for the actual trigger type change.
This helper is required to fix further wreckage caused by the
conversion of TI OMAP to hierarchical irq domains and is therefore
tagged for stable.
irq_chip_retrigger_hierarchy() returns -ENOSYS if it was not able to
find at least one .irq_retrigger() callback implemented in the IRQ
domain hierarchy.
That's wrong, because check_irq_resend() expects a 0 return value from
the callback in case that the hardware assisted resend was not
possible. If the return value is non zero the core code assumes
hardware resend success and the software resend is not invoked.
This results in lost interrupts on platforms where none of the parent
irq chips in the hierarchy implements the retrigger callback.
This is observable on TI OMAP, where the hierarchy is:
ARM GIC <- OMAP wakeupgen <- TI Crossbar
Return 0 instead so the software resend mechanism gets invoked.
The conversion of the wakeupgen irqchip to hierarchical irq domains
failed to provide a mechanism to properly set the trigger type of an
interrupt.
The wakeupgen irq chip itself has no mechanism and therefore no
irq_set_type() callback. The code before the conversion relayed the
trigger configuration directly to the underlying GIC.
Restore the correct behaviour by setting the wakeupgen irq_set_type
callback to irq_chip_set_type_parent(). This propagates the
set_trigger() call to the underlying GIC irqchip.
The TI crossbar irqchip doesn't provide any facility to configure the
wakeup sources, but the conversion to hierarchical irqdomains set the
irq_set_wake callback to irq_chip_set_wake_parent. The parent chip
(OMAP wakeupgen) has no irq_set_wake function either so the call will
fail with -ENOSYS. As a result the irq_set_wake() call in the resume
path will trigger an 'Unbalanced wake disable' warning.
Before the conversion the GIC irqchip was the top level irqchip and
correctly flagged with IRQCHIP_SKIP_SET_WAKE.
Restore the correct behaviour by removing the irq_set_wake callback
from the crossbar irqchip and setting the IRQCHIP_SKIP_SET_WAKE flag,
which lets the irq_set_irq_wake() call from the driver succeed.
The ARM GIC requires that all interrupts which are not used as a
wakeup source have to be masked during suspend.
The conversion of the crossbar irqchip to hierarchical irq domains
failed to mark the crossbar irqchip with the IRQCHIP_MASK_ON_SUSPEND
flag and therefore broke the suspend requirement of the GIC.
Before the conversion the flags were visible because the GIC was the
top level irqchip. After the conversion the crossbar irqchip is the
top level irq chip whose flags are evaluated in suspend_device_irq().
As the flag is not set the masking of the non-wakeup irqs is not
invoked which breaks suspend.
Add the IRQCHIP_MASK_ON_SUSPEND flag to the crossbar irqchip, so the
GIC interrupts get masked properly.
The conversion of the crossbar irqchip to hierarchical irq domains
failed to provide a mechanism to properly set the trigger type of an
interrupt.
The crossbar irq chip itself has no mechanism and therefore no
irq_set_type() callback. The code before the conversion relayed the
trigger configuration directly to the underlying GIC.
Restore the correct behaviour by setting the crossbar irq_set_type
callback to irq_chip_set_type_parent(). This propagates the
set_trigger() call to the underlying GIC irqchip.
Some callers of those functions were passing uninitialized values to
them. Notably, when reading 0 bytes from an empty file on a 9P
filesystem, the return code of read() was not 0.
Every time we use the logical context with execlists it becomes dirty (as
the hardware will write the new register values afterwards, as well as
the GPU state that will be used). We need to flag the context as
dirty every time, since after a swap-out/swap-in cycle the dirty flag will
be cleared, and a further swap-out cycle will then lose the most recent
GPU state.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Meelis and Helge reported that 3a9ad0b4fdcd ("PCI: Add pci_bus_addr_t")
caused HPMCs on A500 and hangs on rp5470.
PA-RISC does not set ARCH_DMA_ADDR_T_64BIT, even for 64-bit kernels, so
prior to 3a9ad0b4fdcd, we always used 32-bit PCI addresses. After 3a9ad0b4fdcd, we do use 64-bit PCI addresses in 64-bit kernels, and
apparently there's some PA-RISC problem related to them.
Make sure all non-READ SCSI commands get targ_xfer_tag initialized
to 0xffffffff, not just WRITEs.
Double-free of a TUR cmd object occurs under the following scenario:
1. TUR received (targ_xfer_tag is uninitialized and left at 0)
2. TUR status sent
3. First unsolicited NOPIN is sent to initiator (gets targ_xfer_tag of 0)
4. NOPOUT for NOPIN (with TTT=0) arrives
- its ExpStatSN acks TUR status, TUR is queued for removal
- LIO tries to find NOPIN with TTT=0, but finds the same TUR instead,
TUR is queued for removal for the 2nd time
At the last iteration of the loop, j may equal zero and thus
tp_list[j - 1] causes an invalid read.
Change the logic of the loop so that j - 1 is always >= 0.
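A minimal generic sketch of the corrected bound (illustrative only, not the driver's actual loop):

/* Stop while j is still >= 1 so tp_list[j - 1] is always a valid index. */
for (j = count; j > 0; j--) {
        if (match(tp_list[j - 1]))
                break;
}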
After a for-loop was replaced by list_for_each_entry, see
Commit bbbc7e8502c9 ("ALSA: hda - Allocate hda_pcm objects dynamically"),
Commit 751e2216899c ("ALSA: hda: fix possible null dereference"),
a possible NULL pointer dereference has been introduced; this patch adds
the NULL check on pcm->pcm, while leaving a potentially superfluous
check on pcm itself untouched.
The widget power-saving code tries to turn up/down the power of each
widget in the I/O paths that are modified at each jack plug/unplug.
The recent report revealed that the power activation leaves some
widgets unpowered after plugging. This is because
snd_hda_activate_path() turns on the path->active flag at the end of the
function, while the path power management is done before that. Then
it's regarded as if nothing is active, and the driver turns off the
power.
The fix is simply to set the flag at the beginning of the function,
before trying to power up.
The is_active_nid_for_any() function in the generic parser is supposed
to check all connections from/to the given widget, but the current
code checks only the first input connection (index = 0).
This patch corrects the code to check all inputs by passing -1 to the
index argument.
On shutdown/reboot of CX20722, first shut down all EAPDs, then
shut down the afg node to D3.
Failure to do so can lead to spurious noises from the internal speaker
directly after reboot (and before the codec is reinitialized again, i e
in BIOS setup or GRUB menus).
The fix for deadlock in PM in commit [1ee23fe07ee8: ALSA: usb-audio:
Fix deadlocks at resuming] introduced a new check of in_pm flag.
However, the brainless patch author evaluated it in a wrong way
(logical AND instead of logical OR), thus usb_autopm_get_interface()
is wrongly called at probing, leading to unbalance of runtime PM
refcount.
This patch fixes it by correcting the logic.
Reported-by: Hans Yang <hansy@nvidia.com> Fixes: 1ee23fe07ee8 ('ALSA: usb-audio: Fix deadlocks at resuming') Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This should help to fix the following issue in Docker:
https://github.com/opencontainers/runc/issues/133
In some conditions, a Docker container needs to be started twice in
order to work.
As implemented, ACS-4 sense reporting for ATA devices bypasses error
diagnosis and handling in libata degrading EH behavior significantly.
Revert the related changes for now.
As implemented, ACS-4 sense reporting for ATA devices bypasses error
diagnosis and handling in libata degrading EH behavior significantly.
Revert the related changes for now.
ATA_ID_COMMAND_SET_3/4 constants are not reverted as they're used by
later changes.
As implemented, ACS-4 sense reporting for ATA devices bypasses error
diagnosis and handling in libata degrading EH behavior significantly.
Revert the related changes for now.
Commit 000851119e80 changed sha256/512 update functions to
pass more data to nx_build_sg_list(), which ends with
sg list overflows and usually with update functions failing
for data larger than max_sg_len * NX_PAGE_SIZE.
This happens because:
- both "total" and "to_process" are updated, which leads to
"to_process" getting overflowed for some data lengths
For example:
In the first iteration "total" is 50, and let's assume "to_process"
is 30 due to sg limits. At the end of the first iteration "total" is
set to 20. At the start of the 2nd iteration "to_process" overflows on:
to_process = total - to_process;
- "in_sg" is not reset to nx_ctx->in_sg after each iteration
- nx_build_sg_list() is hitting overflow because the amount of data
passed to it would require more than sgmax elements
- as consequence of previous item, data stored in overflowed sg list
may no longer be aligned to SHA*_BLOCK_SIZE
This patch changes the sha256/512 update functions so that "to_process"
respects the sg limits and never tries to pass more data to
nx_build_sg_list(), avoiding the overflows. "to_process" is calculated
as the minimum of "total" and the sg limit at the start of every iteration.
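A minimal sketch of the per-iteration bound (field names illustrative; the real code also buffers a trailing partial block):

u64 max_sg_bytes = (u64)max_sg_len * NX_PAGE_SIZE;

/* Process only whole blocks; a trailing partial block stays buffered. */
while (total >= SHA256_BLOCK_SIZE) {
        /* Never ask nx_build_sg_list() for more than the sg list can hold,
         * and keep each chunk a multiple of the block size. */
        to_process = min_t(u64, total, max_sg_bytes);
        to_process &= ~(SHA256_BLOCK_SIZE - 1);

        /* rebuild in_sg from nx_ctx->in_sg and run the hash op on this chunk */

        total -= to_process;
}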
Fixes: 000851119e80 ("crypto: nx - Fix SHA concurrence issue and sg limit bounds") Signed-off-by: Jan Stancek <jstancek@redhat.com> Cc: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com> Cc: Marcelo Henrique Cerri <mhcerri@linux.vnet.ibm.com> Cc: Fionnuala Gunter <fin@linux.vnet.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit bcdb247c6b6a ("sd: Limit transfer length") clamped the maximum
size of an I/O request to the MAXIMUM TRANSFER LENGTH field in the BLOCK
LIMITS VPD. This had the unfortunate effect of also limiting the maximum
size of non-filesystem requests sent to the device through sg/bsg.
Avoid using blk_queue_max_hw_sectors() and set the max_sectors queue
limit directly.
Also update the comment in blk_limits_max_hw_sectors() to clarify that
max_hw_sectors defines the limit for the I/O controller only.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Reported-by: Brian King <brking@linux.vnet.ibm.com> Tested-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>