Age | Commit message | Author | Files | Lines
2024-02-05 | docs/about: Deprecate the old "power5+" and "power7+" CPU names | Thomas Huth | 1 | -0/+9
For consistency, we should drop the names with a "+" in them in the long run. Message-ID: <20240117141054.73841-3-thuth@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-02-05 | target/ppc/cpu-models: Rename power5+ and power7+ for new QOM naming rules | Thomas Huth | 3 | -10/+8
The character "+" is now forbidden in QOM device names (see commit b447378e1217 - "Limit type names to alphanumerical and some few special characters"). For the "power5+" and "power7+" CPU names, there is currently a hack in type_name_is_valid() to still allow them for compatibility reasons. However, there is a much nicer solution for this: Simply use aliases! This way we can still support the old names without the need for the ugly hack in type_name_is_valid(). Message-ID: <20240117141054.73841-2-thuth@redhat.com> Reviewed-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2024-02-05 | hw/scsi/lsi53c895a: add missing decrement of reentrancy counter | Sven Schnelle | 1 | -0/+1
When the maximum count of SCRIPTS instructions is reached, the code stops execution and returns, but fails to decrement the reentrancy counter. This effectively renders the SCSI controller unusable because on next entry the reentrancy counter is still above the limit. This bug was seen on HP-UX 10.20 which seems to trigger SCRIPTS loops. Fixes: b987718bbb ("hw/scsi/lsi53c895a: Fix reentrancy issues in the LSI controller (CVE-2023-0330)") Signed-off-by: Sven Schnelle <svens@stackframe.org> Message-ID: <20240128202214.2644768-1-svens@stackframe.org> Reviewed-by: Thomas Huth <thuth@redhat.com> Tested-by: Helge Deller <deller@gmx.de> Signed-off-by: Thomas Huth <thuth@redhat.com>
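The shape of the bug, as a hedged sketch rather than the literal lsi53c895a code (the field name, limit and loop condition are assumptions): the guard incremented on entry to SCRIPTS processing must be decremented on every exit path, including the instruction-limit bailout.

    static void lsi_execute_script(LSIState *s)
    {
        int insn_processed = 0;

        s->reentrancy_level++;                  /* guard taken on entry */
        while (scripts_running(s)) {            /* hypothetical condition */
            if (++insn_processed > MAX_SCRIPTS_INSNS) {
                s->reentrancy_level--;          /* the missing decrement */
                return;                         /* bail out of a SCRIPTS loop */
            }
            /* ... fetch and execute one SCRIPTS instruction ... */
        }
        s->reentrancy_level--;                  /* normal exit path */
    }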
2024-02-05 | migration/multifd: Fix MultiFDSendParams.packet_num race | Peter Xu | 2 | -24/+34
As reported by Fabiano [1] (and, per Fabiano, traced back to Elena's initial report in October 2023), the way MultiFDSendParams.packet_num is assigned and stored is buggy. Consider two consecutive operations: (1) queue a job onto multifd send thread X, then (2) queue a sync request onto the same send thread X. MultiFDSendParams.packet_num is then assigned twice, and the first assignment can already be lost. To avoid that, move the packet_num assignment from p->packet_num into the point where the thread fills in the packet, and use atomic operations to protect the field so there is no race. Note that an atomic fetch_add() may not scale well, but multifd should be fine since the number of threads normally does not go beyond 16. Let's leave that concern for later and fix the issue first. There is also a trick needed to make a uint64_t packet number always work on 32-bit hosts; switch to uintptr_t for now to simplify the case. The packet number will overflow more easily on 32-bit hosts, but that shouldn't be a major concern, as 32-bit systems are not the main audience for the performance concerns multifd addresses. We also need to move the multifd_send_state definition up, so that multifd_send_fill_packet() can reference it. [1] https://lore.kernel.org/r/87o7d1jlu5.fsf@suse.de Reported-by: Elena Ufimtseva <elena.ufimtseva@oracle.com> Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-23-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
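A minimal sketch of where the assignment ends up (the state/field layout is assumed; qatomic_fetch_inc() and cpu_to_be64() are existing QEMU helpers):

    /* Sketch: the packet number is drawn atomically at fill time, so two
     * jobs queued back-to-back on the same channel cannot clobber it. */
    static void multifd_send_fill_packet(MultiFDSendParams *p)
    {
        MultiFDPacket_t *packet = p->packet;
        uint64_t packet_num;

        packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
        packet->packet_num = cpu_to_be64(packet_num);
        /* ... flags, page count, offsets ... */
    }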
2024-02-05 | migration/multifd: Stick with send/recv on function names | Peter Xu | 3 | -16/+16
Most of the multifd code uses send/recv to represent the two sides, but some rare cases use save/load. Since send/recv is the majority, replace the save/load cases with send/recv globally. Now we have a consensus on the naming. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-22-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Cleanup multifd_load_cleanup() | Peter Xu | 1 | -22/+30
Use similar logic to clean up the recv side. Note that multifd_recv_terminate_threads() may need similar rework to the sender side, but let's leave that for later. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-21-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Cleanup multifd_save_cleanup() | Peter Xu | 1 | -32/+59
Shrink the function by moving relevant work into helpers: move the thread join()s into multifd_send_terminate_threads(), then create two more helpers to cover channel/state cleanups. Add a TODO entry for the thread termination process, because p->running is still buggy; we need to fix it at some point, but it is not covered yet. Suggested-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-20-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Rewrite multifd_queue_page() | Peter Xu | 1 | -19/+37
The current multifd_queue_page() is not easy to read and follow, for a few reasons:
- There is no helper at all to show what a condition actually means; in short, readability is low.
- It relies on pages->ramblock being cleared to detect an empty queue. That slightly overloads the ramblock pointer, per Fabiano [1], and I agree.
- It contains a self recursion, even though that is not necessary.
Rewrite this function, and add some comments to make it even clearer what it does. [1] https://lore.kernel.org/r/87wmrpjzew.fsf@suse.de Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-19-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Change retval of multifd_send_pages() | Peter Xu | 1 | -7/+8
Using int is overkill when there are only two options. Change it to a boolean. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-18-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Change retval of multifd_queue_page() | Peter Xu | 3 | -6/+7
Using int is overkill when there are only two options. Change it to a boolean. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-17-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Split multifd_send_terminate_threads() | Peter Xu | 2 | -10/+19
Split multifd_send_terminate_threads() into two functions:
- multifd_send_set_error(): used when an error happened on the sender side; it only sets the error and quit state.
- multifd_send_terminate_threads(): used only by the main thread to kick all multifd send threads out of sleep, for the final recycling.
Use multifd_send_set_error() in the three old call sites where only the error needs to be set. Use multifd_send_terminate_threads() in the last one, where the main thread kicks the multifd threads at the end in multifd_save_cleanup(). Both helpers still need to set quitting=1. Suggested-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-16-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
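Roughly, the split could look like the sketch below (the helper names follow the message; the bodies and the exact fields touched are assumptions):

    /* Sketch (error path): record the error and flag the quit state. */
    static void multifd_send_set_error(Error *err)
    {
        qatomic_set(&multifd_send_state->exiting, 1);
        if (err) {
            migrate_set_error(migrate_get_current(), err);
        }
    }

    /* Sketch (main thread only): wake every sender for final recycling. */
    static void multifd_send_terminate_threads(void)
    {
        for (int i = 0; i < migrate_multifd_channels(); i++) {
            qemu_sem_post(&multifd_send_state->params[i].sem);
        }
    }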
2024-02-05 | migration/multifd: Forbid spurious wakeups | Peter Xu | 1 | -4/+3
Now multifd's logic is designed to have no spurious wakeups. I still remember a talk with Juan where he seemed to agree we should drop the handling now; if my memory is right, it was there because multifd used to hit spurious wakeups while still being debugged. Let's drop it and see what explodes, as long as we are not yet at soft freeze. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-15-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Move header prepare/fill into send_prepare() | Peter Xu | 4 | -33/+37
This patch redefines the interface of ->send_prepare(). It further simplifies multifd_send_thread(), especially for zero copy. With the new interface, the hook is required to do all the work of preparing the IOVs to send; once it completes, the IOVs should be ready to be dumped into the specific multifd QIOChannel later. So the API now looks like:
  p->pages -----------> send_prepare() -------------> IOVs
This also prepares for the case where the input may be something other than p->pages, but that's for later. This patch achieves a similar goal to what Fabiano proposed here: https://lore.kernel.org/r/20240126221943.26628-1-farosas@suse.de However the send() interface may not be necessary. I'm boldly attaching a "Co-developed-by" for Fabiano. Co-developed-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-14-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: multifd_send_prepare_header() | Peter Xu | 2 | -8/+16
Introduce a helper, multifd_send_prepare_header(), to set up the header packet for the multifd sender. It's fine to set up IOV[0] _before_ send_prepare(), because the packet buffer is already allocated even if its content is yet to be filled in. With this helper, we can already slightly clean up the zero copy path. Note that I explicitly put it into multifd.h, because I want it inlined directly into multifd*.c where necessary later. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-13-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
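Functionally the helper just reserves IOV[0] for the (yet to be filled) header; a sketch consistent with the description, with the exact body assumed:

    /* Sketch: the header buffer already exists, so it can be attached to
     * IOV[0] before send_prepare() fills in its contents. */
    static inline void multifd_send_prepare_header(MultiFDSendParams *p)
    {
        p->iov[0].iov_base = p->packet;
        p->iov[0].iov_len = p->packet_len;
        p->iovs_num = 1;
    }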
2024-02-05 | migration/multifd: Move trace_multifd_send|recv() | Peter Xu | 1 | -5/+6
Move them into the packet fill/unfill functions. With that, we can further clean up the send/recv thread procedures, and remove one more temp var. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-12-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Move total_normal_pages accounting | Peter Xu | 1 | -2/+2
Just like the previous patch, move the accounting for total_normal_pages on both src/dst sides into the packet fill/unfill procedures. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-11-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Rename p->num_packets and clean it up | Peter Xu | 2 | -11/+8
This field, whether on src or dest, is only used for debugging. It could even be removed already, except that it still provides some accounting of "how many packets were sent/received by this thread". The more important counter is packet_num, which is embedded in the multifd packet header (MultiFDPacket_t). So let's keep these fields for now, but make them much easier to understand, by doing the following:
- Rename both of them to packets_sent / packets_recved; the old name (num_packets) is far too confusing when we already have MultiFDPacket_t.packet_num.
- Stop worrying about the "initial packet": we know we will send it, and that's good enough. For the accounting it hardly matters whether we start at 0 or 1.
- Move the updates to where we send/recv the packets: multifd_send_fill_packet() for senders, multifd_recv_unfill_packet() for receivers.
Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-10-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Drop pages->num check in sender thread | Peter Xu | 1 | -6/+7
Now with a split SYNC handler, we always have pages->num set for pending_job==true. Assert it instead. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-9-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Simplify locking in sender thread | Peter Xu | 1 | -7/+16
The sender thread yields p->mutex before IO starts, trying not to block the requester thread. This may be an unnecessary lock optimization, because the requester can already read pending_job safely without the lock: the requester is currently the only one who can assign a task. Drop that lock complication on both sides: (1) in the sender thread, hold the mutex until the job is done; (2) in the requester thread, check that pending_job is clear locklessly. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-8-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Separate SYNC request with normal jobs | Peter Xu | 2 | -16/+36
Multifd provides a threaded model for processing jobs. On the sender side there can be two kinds of job: (1) a list of pages to send, or (2) a sync request. The sync request is a very special kind of job: it never contains a page array, only a multifd packet telling the dest side to synchronize with the pages already sent. Before this patch, both requests used the pending_job field: whatever the request, it bumps pending_job, and the multifd sender thread decrements it after it finishes one job. However, this is racy, because SYNC is special in that it also needs to set MULTIFD_FLAG_SYNC in p->flags to mark the request as a sync. Consider this sequence of operations:
- the migration thread enqueues a job to send some pages, pending_job++ (0->1)
- [...before the selected multifd sender thread wakes up...]
- the migration thread enqueues another job to sync, pending_job++ (1->2), and sets p->flags=MULTIFD_FLAG_SYNC
- the multifd sender thread wakes up and finds pending_job==2
- it sends the 1st packet with MULTIFD_FLAG_SYNC and the list of pages
- it sends the 2nd packet with flags==0 and no pages
This is not expected, because MULTIFD_FLAG_SYNC should ideally arrive after all the pages have been received. Meanwhile, the 2nd packet is completely useless and carries zero information. I didn't verify the above, but I think the issue is still benign in that, at least on the recv side, we always receive pages before handling MULTIFD_FLAG_SYNC. However that's not always guaranteed and is simply tricky. Another reason to separate it is that using p->flags to communicate between the two threads is not clearly defined; it is very hard to read and understand why accessing p->flags is always safe (see the current implementation of multifd_send_thread(), where we tried to cache only p->flags). It doesn't need to be that complicated. This patch introduces pending_sync, a separate flag that simply indicates the requester needs a sync. Alongside, remove the tricky caching of p->flags: after this patch p->flags is only used by the multifd sender thread, which makes it crystal clear that accessing p->flags is always thread safe. With that, we can also safely convert pending_job into a boolean, because we don't support more than one pending job anyway. Always use atomic ops to access both flags to avoid any caching effects. While at it, drop the initial "pending_job = 0" assignment, because the struct is always allocated with g_new0(). Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-7-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
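As a sketch of the resulting shape (the two request fields follow the message; the loop structure and atomic helper choice are assumptions), the sender thread now consumes the two request types separately and is the only writer of p->flags:

    /* Sketch: two independent request flags instead of a counter. */
    bool pending_job;    /* requester queued a batch of pages */
    bool pending_sync;   /* requester asked for a sync packet  */

    /* Sender thread, sketched: */
    if (qatomic_read(&p->pending_job)) {
        /* build and send the data packet; only this thread touches p->flags */
        qatomic_set(&p->pending_job, false);
    } else if (qatomic_read(&p->pending_sync)) {
        p->flags = MULTIFD_FLAG_SYNC;
        /* build and send a sync-only packet */
        qatomic_set(&p->pending_sync, false);
    }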
2024-02-05 | migration/multifd: Drop MultiFDSendParams.normal[] array | Peter Xu | 4 | -30/+21
This array is redundant when p->pages exists. Now that we have extended the life of p->pages to the whole period where pending_job is set, it is safe to always use p->pages->offset[] rather than p->normal[]. Drop the array. Alongside, normal_num is also redundant, being the same as p->pages->num. This doesn't apply to the recv side: there is no extra buffering there, so the p->normal[] array is still needed. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-6-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration/multifd: Postpone reset of MultiFDPages_t | Peter Xu | 1 | -4/+14
Currently we reset the MultiFDPages_t object in the multifd sender thread in the middle of the sending job. That's not necessary, because the "*pages" struct will not be reused until pending_job is cleared. Move the reset to the end, after the job is completed, and provide a helper to reset a "*pages" object. Use the same helper when freeing the object, too. This prepares us to keep using p->pages in the follow-up patches, where we may drop p->normal[]. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-5-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
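The reset helper can be as small as the sketch below (assuming the MultiFDPages_t fields named elsewhere in this series; the offset[] array stays allocated and is simply overwritten next time):

    /* Sketch: make a MultiFDPages_t reusable without freeing it. */
    static void multifd_pages_reset(MultiFDPages_t *pages)
    {
        pages->num = 0;
        pages->block = NULL;
    }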
2024-02-05 | migration/multifd: Drop MultiFDSendParams.quit, cleanup error paths | Peter Xu | 2 | -54/+33
The multifd send side has two fields to indicate error quits:
- MultiFDSendParams.quit
- &multifd_send_state->exiting
Merge them into the global one. The replacement is done by changing all p->quit checks into checks of the global variable, which needs no lock. A few more things are done on top of this:
- multifd_send_terminate_threads(): move the xchg() of &multifd_send_state->exiting up, so that it covers the tracepoint, migrate_set_error() and migrate_set_state().
- multifd_send_sync_main(): in the 2nd loop, add one more check of the global variable, so we don't keep looping if QEMU has already decided to quit.
- multifd_tls_outgoing_handshake(): use multifd_send_terminate_threads() to set the error state. That also updates MigrationState.error, so we persist the first error hit on that specific channel.
- multifd_new_send_channel_async(): take a similar approach; drop migrate_set_error(), because multifd_send_terminate_threads() already covers that. Unwrap the helper multifd_new_send_channel_cleanup() along the way; it is not really needed.
Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-4-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
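After the merge, every send-side quit check becomes a lockless read of the one global flag, along these lines (the helper name is used here only for illustration):

    /* Sketch: single global exit flag, readable without any lock. */
    static bool multifd_send_should_exit(void)
    {
        return qatomic_read(&multifd_send_state->exiting);
    }

    /* e.g. in the sender thread loop: */
    if (multifd_send_should_exit()) {
        break;
    }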
2024-02-05 | migration/multifd: multifd_send_kick_main() | Peter Xu | 1 | -6/+15
When a multifd sender thread hits an error, it always needs to kick the main thread by posting all the semaphores the main thread may be waiting on. Provide a helper for this and deduplicate the code. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-3-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
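A sketch of the deduplicated kick (the two semaphores named here, p->sem_sync and channels_ready, are assumptions based on the surrounding multifd code):

    /* Sketch: on error, wake the main thread wherever it may be waiting. */
    static void multifd_send_kick_main(MultiFDSendParams *p)
    {
        qemu_sem_post(&p->sem_sync);
        qemu_sem_post(&multifd_send_state->channels_ready);
    }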
2024-02-05 | migration/multifd: Drop stale comment for multifd zero copy | Peter Xu | 1 | -11/+0
We've already done that with multifd_flush_after_each_section, for multifd in general. Drop the stale "TODO-like" comment. Reviewed-by: Fabiano Rosas <farosas@suse.de> Link: https://lore.kernel.org/r/20240202102857.110210-2-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-05 | migration: prevent migration when VM has poisoned memory | William Roche | 4 | -0/+28
A memory page poisoned from the hypervisor level is no longer readable. The migration of a VM will crash Qemu when it tries to read the memory address space and stumbles on the poisoned page with a similar stack trace:
Program terminated with signal SIGBUS, Bus error.
#0  _mm256_loadu_si256
#1  buffer_zero_avx2
#2  select_accel_fn
#3  buffer_is_zero
#4  save_zero_page
#5  ram_save_target_page_legacy
#6  ram_save_host_page
#7  ram_find_and_save_block
#8  ram_save_iterate
#9  qemu_savevm_state_iterate
#10 migration_iteration_run
#11 migration_thread
#12 qemu_thread_start
To avoid this VM crash during the migration, prevent the migration when a known hardware poison exists on the VM. Signed-off-by: William Roche <william.roche@oracle.com> Link: https://lore.kernel.org/r/20240130190640.139364-2-william.roche@oracle.com Signed-off-by: Peter Xu <peterx@redhat.com>
2024-02-04 | hv-balloon: use get_min_alignment() to express 32 GiB alignment | David Hildenbrand | 1 | -16/+21
Let's implement the get_min_alignment() callback for memory devices and, for the device memory region, use the alignment of the host memory region. This mimics what virtio-mem does, and allows for re-introducing proper alignment checks for the memory region size (where we don't care about additional device requirements) in memory device core. Message-ID: <20240117135554.787344-2-david@redhat.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2024-02-03 | tcg/s390x: Add TCG_CT_CONST_CMP | Richard Henderson | 3 | -21/+58
Better constraint for tcg_out_cmp, based on the comparison. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/s390x: Split constraint A into J+U | Richard Henderson | 3 | -23/+23
Signed 33-bit == signed 32-bit + unsigned 32-bit. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/ppc: Support TCG_COND_TST{EQ,NE} | Richard Henderson | 2 | -9/+115
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/ppc: Add TCG_CT_CONST_CMP | Richard Henderson | 3 | -10/+44
Better constraint for tcg_out_cmp, based on the comparison. We can't yet remove the fallback to load constants into a scratch because of tcg_out_cmp2, but that path should not be as frequent. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/ppc: Tidy up tcg_target_const_match | Richard Henderson | 1 | -11/+16
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/ppc: Use cr0 in tcg_to_bc and tcg_to_isel | Richard Henderson | 1 | -34/+34
Using cr0 means we could choose to use rc=1 to compute the condition. Adjust the tables and tcg_out_cmp that feeds them. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/ppc: Sink tcg_to_bc usage into tcg_out_bc | Richard Henderson | 1 | -11/+17
Rename the current tcg_out_bc function to tcg_out_bc_lab, and create a new function that takes an integer displacement + link. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/sparc64: Support TCG_COND_TST{EQ,NE} | Richard Henderson | 2 | -3/+15
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/sparc64: Pass TCGCond to tcg_out_cmp | Richard Henderson | 1 | -10/+11
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/sparc64: Hoist read of tcg_cond_to_rcond | Richard Henderson | 1 | -11/+14
Use a non-zero value here (an illegal encoding) as a better condition than is_unsigned_cond for when MOVR/BPR is usable. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/i386: Use TEST r,r to test 8/16/32 bits | Paolo Bonzini | 1 | -0/+17
Just like when testing against the sign bits, TEST r,r can be used when the immediate is 0xff, 0xff00, 0xffff, 0xffffffff. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
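The underlying idea, sketched as host-independent C (not the tcg/i386 backend code): when the constant being tested is exactly one of those masks, the compare can be emitted as a register-against-register TEST at the matching width (or against the high-byte register) instead of a TEST with an immediate. The helper below only classifies the mask; instruction emission is elided.

    #include <stdint.h>

    typedef enum {
        TEST_IMM,     /* fall back to: test $imm, %reg    */
        TEST_BYTE,    /* 0xff         -> testb %al, %al   */
        TEST_HBYTE,   /* 0xff00       -> testb %ah, %ah   */
        TEST_WORD,    /* 0xffff       -> testw %ax, %ax   */
        TEST_DWORD,   /* 0xffffffff   -> testl %eax, %eax */
    } TestKind;

    static TestKind classify_test_mask(uint64_t mask)
    {
        switch (mask) {
        case 0xff:         return TEST_BYTE;
        case 0xff00:       return TEST_HBYTE;
        case 0xffff:       return TEST_WORD;
        case 0xffffffff:   return TEST_DWORD;
        default:           return TEST_IMM;
        }
    }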
2024-02-03 | tcg/i386: Improve TSTNE/TESTEQ vs powers of two | Richard Henderson | 3 | -8/+53
Use "test x,x" when the bit is one of the 4 sign bits. Use "bt imm,x" otherwise. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/i386: Support TCG_COND_TST{EQ,NE} | Richard Henderson | 2 | -37/+60
Merge tcg_out_testi into tcg_out_cmp and adjust the two uses. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/i386: Move tcg_cond_to_jcc[] into tcg_out_cmp | Richard Henderson | 1 | -11/+13
Return the x86 condition codes to use after the compare. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/i386: Pass x86 condition codes to tcg_out_cmov | Richard Henderson | 1 | -8/+8
Hoist the tcg_cond_to_jcc index outside the function. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/arm: Support TCG_COND_TST{EQ,NE} | Richard Henderson | 2 | -2/+29
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231028194522.245170-12-richard.henderson@linaro.org> [PMD: Split from bigger patch, part 2/2] Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20231108145244.72421-2-philmd@linaro.org>
2024-02-03 | tcg/arm: Split out tcg_out_cmp() | Richard Henderson | 1 | -15/+17
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20231028194522.245170-12-richard.henderson@linaro.org> [PMD: Split from bigger patch, part 1/2] Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20231108145244.72421-1-philmd@linaro.org>
2024-02-03 | tcg/aarch64: Generate CBNZ for TSTNE of UINT32_MAX | Richard Henderson | 1 | -0/+6
... and the inverse, CBZ for TSTEQ. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/aarch64: Generate TBZ, TBNZ | Richard Henderson | 1 | -12/+62
Test the sign bit for LT/GE vs 0, and TSTNE/EQ vs a power of 2. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240119224737.48943-2-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/aarch64: Massage tcg_out_brcond() | Philippe Mathieu-Daudé | 1 | -8/+23
To ease review of the next commit, modify tcg_out_brcond() to switch over TCGCond. No logical change intended. Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20240119224737.48943-1-philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg/aarch64: Support TCG_COND_TST{EQ,NE} | Richard Henderson | 4 | -19/+43
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | tcg: Add TCGConst argument to tcg_target_const_match | Richard Henderson | 11 | -12/+52
Fill the new argument from any condition within the opcode. Not yet used within any backend. Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-02-03 | target/s390x: Improve general case of disas_jcc | Richard Henderson | 1 | -44/+22
Avoid code duplication by handling 7 of the 14 cases by inverting the test for the other 7 cases. Use TCG_COND_TSTNE for cc in {1,3}. Use (cc - 1) <= 1 for cc in {1,2}. Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
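Both tricks are simple identities on the 2-bit condition code, worth spelling out; a standalone sketch (the TCG plumbing itself is omitted):

    #include <assert.h>
    #include <stdbool.h>

    /* cc is the s390x condition code, in the range 0..3. */
    static bool cc_is_1_or_3(unsigned cc) { return (cc & 1) != 0; }   /* a TSTNE-vs-1 test */
    static bool cc_is_1_or_2(unsigned cc) { return (cc - 1) <= 1; }   /* unsigned compare  */

    int main(void)
    {
        for (unsigned cc = 0; cc < 4; cc++) {
            assert(cc_is_1_or_3(cc) == (cc == 1 || cc == 3));
            assert(cc_is_1_or_2(cc) == (cc == 1 || cc == 2));
        }
        return 0;
    }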