SuperTinyKernel™ RTOS 1.06.0
Lightweight, high-performance, deterministic, bare-metal C++ RTOS for resource-constrained embedded systems. MIT Open Source License.
stk::hw::SpinLock Class Reference

Atomic busy-wait lock used as the global cross-core synchronisation primitive inside CriticalSection. More...

#include <stk_arch.h>

Public Types

enum  EState {
  UNLOCKED = 0 ,
  LOCKED
}
 Internal lock state values. More...

Public Member Functions

 SpinLock ()
 Construct a SpinLock (unlocked by default).
void Lock ()
 Acquire SpinLock, blocking until it is available.
void Unlock ()
 Release SpinLock, allowing another thread or core to acquire it.
bool TryLock ()
 Attempt to acquire SpinLock in a single non-blocking attempt.
bool IsLocked () const
 Sample current lock state.

Protected Member Functions

 STK_NONCOPYABLE_CLASS (SpinLock)

Protected Attributes

volatile bool m_lock
 Lock state (see EState). 8-byte aligned to occupy its own word and avoid false sharing on SMP targets.

Detailed Description

Atomic busy-wait lock used as the global cross-core synchronisation primitive inside CriticalSection.

Note
Implemented using an atomic test-and-set (or hardware spinlock peripheral on RP2040) so that it is safe across multiple CPU cores. CriticalSection::Enter() acquires this lock (via g_CsuLock) after masking local interrupts, giving the combined interrupt-mask + cross-core guarantee described in CriticalSection.
SpinLock is exposed as a public API for use cases that need a bare cross-core lock without interrupt masking, for example protecting data shared only between two tasks on different cores where ISR access is not a concern.
Use only for very short, low-latency critical sections. Spinning wastes CPU cycles and can increase interrupt latency and power consumption.
Non-recursive: calling Lock() twice from the same thread/core without an intervening Unlock() will deadlock. The ARM implementation includes a debug-break timeout (0xFFFFFF iterations) to catch lock-not-released bugs in debug builds.
See also
CriticalSection

Definition at line 274 of file stk_arch.h.

Member Enumeration Documentation

◆ EState

Internal lock state values.

Enumerator
UNLOCKED 

Lock is free and available for acquisition.

LOCKED 

Lock is held by a thread or core.

Definition at line 280 of file stk_arch.h.

281 {
282 UNLOCKED = 0,
283 LOCKED
284 };

Constructor & Destructor Documentation

◆ SpinLock()

stk::hw::SpinLock::SpinLock ( )
inline explicit

Construct a SpinLock (unlocked by default).

Definition at line 288 of file stk_arch.h.

288 : m_lock(UNLOCKED)
289 {}

References UNLOCKED.


Member Function Documentation

◆ m_lock

volatile bool stk::hw::SpinLock::m_lock
protected

Lock state (see EState). 8-byte aligned to occupy its own word and avoid false sharing on SMP targets.

◆ IsLocked()

bool stk::hw::SpinLock::IsLocked ( ) const
inline

Sample current lock state.

Returns
true if the lock is currently held; false if it is free.
Note
The result is a snapshot only. On SMP systems another core may acquire or release the lock between this read and any subsequent action, so IsLocked() must not be used as a synchronisation check. Use TryLock() or Lock() for safe acquisition.

Definition at line 325 of file stk_arch.h.

325{ return (m_lock == LOCKED); }

References LOCKED.

◆ Lock()

void stk::hw::SpinLock::Lock ( )

Acquire SpinLock, blocking until it is available.

Note
Busy-waits (spins) using __stk_relax_cpu() until the lock transitions to UNLOCKED and this call wins the atomic acquisition.
Warning
Non-recursive. Calling Lock() a second time from the same thread/core while already holding the lock will spin forever (deadlock).
Calling Lock() from an ISR while the interrupted task holds the same lock will also deadlock. Prefer CriticalSection for ISR-to-task synchronisation.

◆ STK_NONCOPYABLE_CLASS()

stk::hw::SpinLock::STK_NONCOPYABLE_CLASS ( SpinLock )
protected

References SpinLock().


◆ TryLock()

bool stk::hw::SpinLock::TryLock ( )

Attempt to acquire SpinLock in a single non-blocking attempt.

Returns
true if the lock was acquired; false if it was already held by another thread/core.
Note
Returns immediately regardless of lock state. On success the caller holds the lock and must call Unlock() when done. On failure the lock state is unchanged.
Useful in try-acquire / back-off patterns or when a fallback action is available if the resource is busy.

◆ Unlock()

void stk::hw::SpinLock::Unlock ( )

Release SpinLock, allowing another thread or core to acquire it.

Note
The lock transitions immediately to UNLOCKED. If another core is spinning in Lock(), it will acquire the lock on its next successful atomic attempt.
Warning
Must only be called by the thread or core that currently holds the lock (via Lock() or a successful TryLock()). Calling Unlock() without a prior acquisition produces undefined behaviour.

The documentation for this class was generated from the following file:
stk_arch.h