Implementing the Bubble Sort Algorithm in C++

Bubble Sort is a classic introductory sorting algorithm. Its core idea resembles bubbles rising through water: by repeatedly comparing adjacent elements and swapping out-of-order pairs, larger elements gradually "bubble" toward the end of the array. The basic process is as follows: in each round, start from the first element, compare adjacent pairs, and swap them if they are out of order. Each round fixes the position of one maximum element, and this continues until the array is sorted. In the C++ implementation, the `bubbleSort` function uses an outer loop to control the number of rounds (at most n-1) and an inner loop to compare and swap adjacent elements, with a `swapped` flag as an optimization: if no swaps occur in a round, the algorithm exits early. The time complexity is O(n²) in the worst and average cases and O(n) in the best case (with the optimization), with a space complexity of O(1); it is a stable sorting algorithm. Despite its low efficiency, Bubble Sort is intuitive and easy to understand, and mastering its "compare and swap" logic is a key first step in learning sorting, making it well suited to introductory algorithm practice.

Read More
Implementing the Radix Sort Algorithm with Python

Radix sort is a non-comparative integer sorting algorithm. Its core idea is to distribute elements into buckets and collect them digit by digit, from the least significant digit to the most significant. The basic steps are: first, determine the number of digits of the largest number in the array; then, for each digit position from least to most significant, perform a "distribution" pass (10 buckets for digits 0-9) followed by a "collection" pass. Elements sharing the same current digit go into the same bucket, and the array is rebuilt by collecting the buckets in order, until every digit position has been processed. In Python, this is implemented by looping over the digit positions, computing each element's current digit to place it into a bucket, and then collecting. The time complexity is O(d×(n+k)), where d is the number of digits of the maximum number, n is the array length, and k=10; the space complexity is O(n+k). It is well suited to integer arrays with few digits. Negative numbers can be handled by sorting them separately by absolute value, reversing that result, and concatenating it with the sorted non-negative values.
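
As a concrete illustration of the distribute-and-collect loop described above, here is a minimal Python sketch (not necessarily the article's actual code; the function name and test array are illustrative, and it assumes non-negative integers):

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers."""
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1  # weight of the current digit: 1, 10, 100, ...
    while max_val // exp > 0:
        buckets = [[] for _ in range(10)]           # one bucket per digit 0-9
        for num in arr:
            buckets[(num // exp) % 10].append(num)  # distribute by current digit
        arr = [num for bucket in buckets for num in bucket]  # collect in order
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```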

Read More
Implementing Bucket Sort Algorithm with Python

Bucket sort is a non-comparison sorting algorithm built on the divide-and-conquer principle. It achieves overall order by dividing data into buckets, sorting the elements within each bucket, and then merging the results. Core steps: first, distribute data into buckets based on its distribution characteristics; then sort the elements in each bucket with a simple method (e.g., the built-in sort); finally, merge all buckets in order. Applicable scenarios: when data is uniformly distributed within a limited range, its efficiency approaches linear time, O(n); with non-uniform distributions it may degrade to O(n²), performing worse than quicksort. Python implementation (taking floating-point numbers in the [0, 1) interval as an example): create n empty buckets (where n is the length of the data), assign each value to a bucket via `int(num * n)`, sort the elements within each bucket, and merge all buckets. The code is concise, but the bucket-index calculation must be adjusted to the data range, and bucket sizing should be tuned so extreme values do not pile up in a few buckets. In summary, bucket sort suits uniformly distributed data: it leverages divide-and-conquer to reduce complexity, but the data's distribution must be considered to avoid performance degradation.
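
A minimal sketch along those lines, assuming floats in [0, 1) as in the article's example (the function name and test data are illustrative):

```python
def bucket_sort(data):
    """Bucket sort for floats in the [0, 1) interval."""
    n = len(data)
    if n == 0:
        return data
    buckets = [[] for _ in range(n)]       # n empty buckets for n values
    for num in data:
        buckets[int(num * n)].append(num)  # assumes 0 <= num < 1
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))      # sort within each bucket, then merge
    return result

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47, 0.51]))
# [0.23, 0.25, 0.32, 0.42, 0.47, 0.51, 0.52]
```

The `int(num * n)` index only works for values strictly below 1; for other ranges the index must be rescaled, as the summary notes.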

Read More
Implementing the Counting Sort Algorithm in Python

Counting sort is an efficient non-comparison sorting algorithm suited to integers with a small value range. Its time complexity is O(n + k), where n is the number of elements and k is the size of the data range. Core steps: 1. Determine the data range (find the min and max); 2. Build a counting array tallying the occurrences of each value; 3. Walk the counting array in order, emitting each value as many times as it was counted. It is stable (the relative order of duplicate elements is preserved), and its memory usage depends on the data range. It works well for integer data with many duplicates or a small range (e.g., exam scores). The Python implementation handles boundary cases, counts occurrences, and reconstructs the sorted array; test cases verify that it handles arrays with duplicate elements and negative numbers.
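
A minimal sketch of this idea, using an offset so negative integers work too (the function name and test data are illustrative):

```python
def counting_sort(arr):
    """Counting sort supporting negative integers via an offset."""
    if not arr:
        return arr
    lo, hi = min(arr), max(arr)      # determine the data range
    counts = [0] * (hi - lo + 1)     # one counter per possible value
    for num in arr:
        counts[num - lo] += 1        # tally occurrences
    result = []
    for i, c in enumerate(counts):
        result.extend([i + lo] * c)  # emit each value `count` times
    return result

print(counting_sort([4, -2, 2, 8, 3, 3, 1]))  # [-2, 1, 2, 3, 3, 4, 8]
```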

Read More
Implementing the Merge Sort Algorithm with Python

Merge sort is a divide-and-conquer algorithm with three core steps: divide (split the array into left and right subarrays until single elements remain), conquer (recursively sort each subarray), and merge (combine the ordered subarrays into a single ordered array). Taking the array [3, 1, 4, 2] as an example, after decomposition each subarray is recursively sorted and then merged into [1, 2, 3, 4]. The Python implementation consists of a merge function (to combine two ordered subarrays in sequence) and a recursive sorting function (to decompose and then call merge). Its characteristics: time complexity O(n log n), space complexity O(n) (extra storage is needed for merge results), and it is a stable sort (the relative order of equal elements is preserved).
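
A minimal sketch of the two functions described, using the summary's [3, 1, 4, 2] example (not necessarily the article's exact code):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stable)
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])       # copy whichever side has leftovers
    result.extend(right[j:])
    return result

def merge_sort(arr):
    if len(arr) <= 1:             # a single element is already sorted
        return arr
    mid = len(arr) // 2           # divide into left and right halves
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]
```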

Read More
Implementing Heap Sort Algorithm with Python

Heap Sort is an efficient sorting algorithm that leverages the heap data structure, with a guaranteed O(n log n) time complexity in all cases and a space complexity of O(1), making it suitable for sorting large-scale data. A heap is a complete binary tree where each parent node's value is greater than or equal to (max heap) or less than or equal to (min heap) its children's values. In the array representation, the children of a node at index i are located at 2i+1 and 2i+2, while the parent of a node at index j is at (j-1)//2. The core operations are: 1. **Heapify**: adjust the subtree rooted at index i into a max heap by comparing it with its children and swapping downward as needed. 2. **Build Max Heap**: starting from the last non-leaf node (index n//2 - 1) and moving toward the root, heapify every node so the whole tree satisfies the max-heap property. The sorting process first builds the max heap, then repeatedly swaps the root (maximum value) with the last element of the heap, shrinks the heap by one, and calls Heapify to restore the max heap, yielding an array sorted from smallest to largest. Heap Sort sorts in place (though it is not a stable sort), making it well suited to scenarios with large data volumes.
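
A minimal sketch of the two operations and the sorting loop described above (names and test array are illustrative):

```python
def heapify(arr, n, i):
    """Sift the value at index i down so the subtree rooted at i is a max heap."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2   # children of node i in the array layout
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)         # continue sifting down

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):  # build max heap from last non-leaf node
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]  # move current max to its final slot
        heapify(arr, end, 0)             # restore the heap on the shrunken prefix
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```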

Read More
Implementing the Selection Sort Algorithm with Python

Selection sort is a simple and intuitive sorting algorithm. Its core idea is to repeatedly select the smallest (or largest) element from the unsorted portion and place it at the end of the sorted portion until the entire array is sorted. The steps: assume the first unsorted element is the smallest, scan the rest of the unsorted portion for anything smaller, then swap the smallest element found to the boundary of the sorted portion, repeating until done. In the Python implementation, the outer loop variable `i` marks the end of the sorted portion (ranging from 0 to n-2), and the inner loop variable `j` scans the unsorted portion (from i+1 to n-1) to find the index `min_index` of the smallest element; finally, `arr[i]` is swapped with `arr[min_index]`. For the test array [64, 25, 12, 22, 11], the sorted result is [11, 12, 22, 25, 64]. It has a time complexity of O(n²) and a space complexity of O(1), sorting in place. Its characteristics: simple to understand, but unstable (equal elements may change relative order), and suitable for small-scale data.
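
A minimal sketch matching the loop structure described, using the summary's test array:

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):        # i marks the end of the sorted portion
        min_index = i             # assume the current element is the smallest
        for j in range(i + 1, n):  # scan the unsorted portion
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]  # one swap per round
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```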

Read More
Implementing the Shell Sort Algorithm with Python

Shell Sort is an improved version of Insertion Sort that first sorts coarsely across groups and then finely, using grouping to shrink the distances elements must travel. The core is to pick an initial increment (e.g., half the array length), divide the array into groups whose elements are spaced by that increment, and apply Insertion Sort within each group. The process repeats with the increment halved until it reaches 1, which completes the "fine sorting." The key insight is that grouping reduces element movement: large-interval passes bring the array close to sorted order early, so the final increment-1 Insertion Sort pass finishes quickly. Its time complexity depends on the gap sequence: with the simple halving sequence, the worst case is O(n²), while better sequences give average performance well below that; the space complexity is O(1). Shell Sort suits moderately sized arrays with unevenly distributed elements and is an efficient in-place sorting algorithm.
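
A minimal sketch with the halving gap sequence described above (the test data is illustrative):

```python
def shell_sort(arr):
    n = len(arr)
    gap = n // 2                       # initial increment: half the length
    while gap > 0:
        for i in range(gap, n):        # gapped insertion sort over all groups
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]  # shift larger elements gap slots right
                j -= gap
            arr[j] = temp
        gap //= 2                      # halve the increment until it reaches 1
    return arr

print(shell_sort([12, 34, 54, 2, 3]))  # [2, 3, 12, 34, 54]
```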

Read More
Implementing the Insertion Sort Algorithm with Python

This article introduces the insertion sort algorithm, whose core idea is to insert elements one by one into a sorted subarray, much like slotting playing cards into an ordered hand. The basic approach: starting from the second element of the array, treat each element as the one to be inserted, compare it with the sorted subarray from back to front, find the appropriate position, and insert it there, keeping the subarray ordered at all times. In the Python implementation, the outer loop traverses the elements to be inserted (starting from index 1), and the inner while loop compares and shifts elements backward; a temporary variable `temp` holds the current element, which is finally placed into the correct position. The sort is in place, using only one temporary variable, for a space complexity of O(1). Time complexity: best case (array already sorted) O(n), worst case (reverse order) O(n²). It suits small-scale or nearly sorted data, with a simple, stable implementation.
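
A minimal sketch of the described loop structure (the test data is illustrative):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):         # start from the second element
        temp = arr[i]                    # element to be inserted
        j = i - 1
        while j >= 0 and arr[j] > temp:  # scan the sorted part from the end
            arr[j + 1] = arr[j]          # shift larger elements one slot right
            j -= 1
        arr[j + 1] = temp                # drop temp into its position
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```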

Read More
Implementing the QuickSort Algorithm in Python

Quick Sort is based on the divide-and-conquer principle: select a pivot value, partition the array around it, and recursively sort the subarrays. The basic idea: choose a pivot (e.g., the first element of the array), partition the array into elements less than and greater than the pivot, and recursively process the two subarrays. The partitioning step is the critical one: with left and right pointers traversing toward each other, the right pointer moves left to find an element smaller than the pivot while the left pointer moves right to find one larger; these are swapped, and the process continues until the pointers meet, at which point the pivot is swapped into its final position. In the Python implementation, the `partition` function determines the pivot position and `quick_sort` recursively processes the left and right subarrays; test code verifies the result. Complexity: average case O(n log n) (when partitioning is balanced), worst case O(n²) (e.g., already-sorted input with the first element as pivot, which choosing the pivot randomly can avoid). Quick Sort is an efficient, practical sorting algorithm in wide real-world use; understanding its partitioning logic and recursion is key to mastering sorting algorithms.
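
A minimal sketch of the two-pointer partitioning described above, with the first element as pivot (not necessarily the article's exact code; names and test data are illustrative):

```python
def partition(arr, low, high):
    """Two-pointer partition with the first element as pivot."""
    pivot = arr[low]
    left, right = low, high
    while left < right:
        while left < right and arr[right] >= pivot:
            right -= 1                 # find an element smaller than the pivot
        while left < right and arr[left] <= pivot:
            left += 1                  # find an element larger than the pivot
        arr[left], arr[right] = arr[right], arr[left]
    arr[low], arr[left] = arr[left], arr[low]  # pivot to its final position
    return left

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)  # pivot lands at index p
        quick_sort(arr, low, p - 1)    # recurse on the two sides
        quick_sort(arr, p + 1, high)
    return arr

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
```

Moving the right pointer first matters here: it guarantees the meeting point holds a value no larger than the pivot, so the final swap is safe.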

Read More
Implementing the Bubble Sort Algorithm with Python

Bubble Sort is based on the "bubble rising" principle. Its core idea is to repeatedly compare adjacent elements and swap those in the wrong order, allowing larger elements to gradually "bubble" to the end of the array until the whole array is sorted. The working steps: traverse the array for multiple rounds, comparing and swapping out-of-order adjacent pairs in each round; after each round, the largest unsorted element lands in its correct position, and if no swaps occur in a round, the array is already sorted and the process terminates early. In the Python implementation, the outer loop controls the number of sorting rounds (at most n-1), the inner loop compares and swaps adjacent elements, and a `swapped` flag optimizes the termination condition. The worst-case time complexity is O(n²) (for a completely reversed array), the best case is O(n) (for a sorted array, with the optimization), the space complexity is O(1), and it is a stable sorting algorithm. Bubble Sort is simple and intuitive, suits small-scale data, and provides a foundation for understanding sorting; by examining its principles and Python implementation, one can quickly grasp the core logic of comparing and swapping adjacent elements.
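
A minimal sketch with the `swapped` early-exit optimization (the test data is illustrative):

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):          # at most n-1 rounds
        swapped = False
        for j in range(n - 1 - i):  # the last i elements are already in place
            if arr[j] > arr[j + 1]:  # swap out-of-order neighbours
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:             # no swaps: already sorted, exit early
            break
    return arr

print(bubble_sort([64, 34, 25, 12, 22, 11, 90]))  # [11, 12, 22, 25, 34, 64, 90]
```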

Read More
Implementing Radix Sort Algorithm in Java

Radix sort is a non-comparison integer sorting algorithm that processes digits from the least significant to the most significant. It distributes each number into "buckets" based on the current digit, then collects them back into the original array in bucket order, repeating until all digits are processed. It suits integers with few digits and is quite efficient there. The basic idea is "distribute, collect, repeat": distribute numbers into buckets by the current digit (units, tens, etc.), collect them back into the array in bucket order, and repeat for every digit. Taking the array [5, 3, 8, 12, 23, 100] as an example, it is sorted after three rounds: units, tens, and hundreds. In the Java code, the maximum number determines how many digit rounds are needed, `(num / radix) % 10` extracts the current digit, and ArrayLists serve as the buckets for distribution and collection. The time complexity is O(d(n+k)), where d is the number of digits of the maximum number and k=10, and the space complexity is O(n+k). The algorithm is stable and well suited to integer sorting; negative numbers can be separated into negative and positive groups, sorted separately, and then merged.

Read More
Implementing Bucket Sort Algorithm in Java

Bucket sort is a non-comparison sorting algorithm. Its core idea is to distribute data into several "buckets", sort each bucket locally, and then merge the sorted buckets. It suits data that is uniformly distributed over a modest range (e.g., integers with a bounded range). The steps: determine the number and range of the buckets (e.g., for integers from 0 to max, use max+1 buckets), create the bucket containers, traverse the elements and distribute them into the appropriate buckets, sort each bucket internally (e.g., with insertion sort or built-in methods), and finally merge the elements in bucket order. The time complexity is ideally O(n), and the space complexity is O(n). Its advantage is high efficiency on uniformly distributed data; its disadvantages are wasted space when the data range is large and degraded efficiency when the distribution is uneven.

Read More
Implementing the Counting Sort Algorithm in Java

Counting sort is a simple and intuitive non-comparison sorting algorithm. It determines the position of elements by counting their occurrences and using prefix sums. It is suitable for scenarios where the element range is small (e.g., integers), there are many repeated elements, and stable sorting is required. The core idea is: first, determine the element range (find min and max), count the occurrences of each element, calculate the prefix sum to get the final position of the element, and then traverse the original array from the end to generate the sorted result. Implementation steps: handle edge cases (no sorting needed for empty/single-element arrays), determine min/max, create a count array to tally occurrences, compute prefix sums (accumulate to get the final position of elements), and traverse from the end to generate the result. The time complexity is O(n+k) (where n is the array length and k is the element range), and the space complexity is O(n+k). Applicable scenarios include small integer ranges (e.g., scores, ages), many repeated elements, and the need for stable sorting. This algorithm achieves sorting through counting and accumulation without comparisons, making it suitable for beginners to understand the basic ideas of sorting.
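
The article's code is in Java; to keep this page's examples in one language, here is a Python sketch of the same prefix-sum logic (names and test data are illustrative):

```python
def counting_sort_stable(arr):
    """Stable counting sort using prefix sums, as the summary describes."""
    if len(arr) <= 1:
        return list(arr)                # empty/single-element: nothing to sort
    lo, hi = min(arr), max(arr)         # determine the element range
    counts = [0] * (hi - lo + 1)
    for num in arr:
        counts[num - lo] += 1           # tally occurrences
    for i in range(1, len(counts)):
        counts[i] += counts[i - 1]      # prefix sums give final positions
    result = [0] * len(arr)
    for num in reversed(arr):           # back-to-front traversal keeps it stable
        counts[num - lo] -= 1
        result[counts[num - lo]] = num
    return result

print(counting_sort_stable([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```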

Read More
Implementing the Merge Sort Algorithm in Java

Merge sort is an efficient sorting algorithm based on the divide-and-conquer paradigm, with three core steps: divide, conquer, and merge. It recursively splits the array into single-element subarrays, sorts them, and merges pairs of ordered subarrays into a fully ordered array. In the Java implementation, the `mergeSort` method recursively divides the array into left and right halves, sorts each half, and then calls the `merge` method to combine them. The `merge` method uses three pointers to traverse the left and right subarrays, comparing elements to fill the result array and copying any remaining elements directly at the end. Complexity: time O(n log n) (each merge pass takes O(n), across log n recursion levels), space O(n) (extra storage for merged results), and it is a stable sort (the relative order of equal elements is preserved). Merge sort has clear logic, suits large-scale data, and serves as a classic example of divide-and-conquer: it sorts efficiently by recursively splitting and merging ordered subarrays.

Read More
Implementing Heap Sort Algorithm in Java

Heap sort is an efficient sorting algorithm based on the heap data structure, with a time complexity of O(n log n) and a space complexity of O(1). It sorts in place and suits large-scale data. A heap is a special complete binary tree, either a max-heap (each parent's value is greater than or equal to its children's) or a min-heap; heap sort uses a max-heap. The core idea: repeatedly take the maximum value at the top of the heap, place it at the end of the array, and adjust the remaining elements back into a max-heap, until the array is sorted. The implementation has three parts: building a max-heap (heapifying each node, starting from the last non-leaf node); heap adjustment (recursively fixing a subtree to maintain the max-heap property); and the sorting loop (swapping the heap top with the last element, shrinking the heap, and re-adjusting). The core function heapify restores a subtree to a max-heap by recursively comparing parent and child nodes; buildMaxHeap constructs the full max-heap starting from the last non-leaf node; and the main function ties these steps together. Heap sort achieves ordering through efficient heap adjustment, suits space-constrained scenarios, and is an efficient choice for sorting large-scale data.

Read More
Implementing the Selection Sort Algorithm in Java

Selection sort is a simple and intuitive sorting algorithm. Its core idea is to repeatedly select the smallest (or largest) element from the unsorted portion and place it at the end of the sorted portion until the entire array is sorted. The basic approach involves an outer loop to determine the end position of the sorted portion, and an inner loop to find the minimum value in the unsorted portion, followed by swapping this minimum value with the element at the current position of the outer loop. In Java implementation, the `selectionSort` method is implemented with two nested loops: the outer loop iterates through the array (with `i` ranging from 0 to `n-2`), and the inner loop (with `j` ranging from `i+1` to `n-1`) finds the index `minIndex` of the minimum value in the unsorted portion. Finally, the element at position `i` is swapped with the element at `minIndex`. Taking the array `{64, 25, 12, 22, 11}` as an example, the sorted array `[11, 12, 22, 25, 64]` is gradually constructed through each round of swaps. The time complexity is O(n²), making it suitable for small-scale data. This algorithm has a simple logic and easy-to-implement code, serving as a typical example for understanding the basic sorting concepts.

Read More
Implementing Shell Sort Algorithm with Java

Shell Sort is an improved version of Insertion Sort that reduces long-distance element movements by grouping. The core idea is to introduce a step size (Gap) that divides the array into Gap interleaved subsequences; insertion sort is applied to each subsequence, and the Gap is gradually reduced until it reaches 1 (at which point a standard Insertion Sort pass finishes the job). Algorithm steps: initialize Gap to half the array length, insertion-sort each subsequence, then halve the Gap and repeat while it remains positive. In the Java implementation, the outer loop shrinks the Gap from n/2, while the inner loop iterates through the elements, storing the current element in a temporary variable and comparing and shifting elements forward to the correct position. Testing with the array {12, 34, 54, 2, 3} yields the sorted output [2, 3, 12, 34, 54]. By bringing elements toward order in groups, Shell Sort improves efficiency, and optimizing the step-size sequence (e.g., 3k+1) can enhance performance further.

Read More
Implementing the Insertion Sort Algorithm in Java

Insertion sort is a simple and intuitive sorting algorithm. Its core idea is to insert unsorted elements one by one into their correct positions in the sorted part, much like organizing playing cards. It suits small-scale data and is simple to implement. Basic idea: starting from the second element, mark the current element as the one to be inserted; compare it with the sorted part from back to front, shifting each larger element backward until the insertion position is found; repeat until all elements are processed. In the Java implementation, the element to be inserted is saved first, and the insertion completes by comparing and shifting elements backward in a loop. Complexity: best case O(n) (already sorted), worst and average O(n²); space O(1) (in-place). It is a stable sort suited to small-scale or nearly sorted data. Its core is gradual insertion; the implementation is simple, and its stability and in-place nature make it perform well on small-scale sorting.

Read More
Implementing QuickSort Algorithm in Java

QuickSort is based on the divide-and-conquer approach. Its core is selecting a pivot element, partitioning the array into elements less than and greater than the pivot, and recursively sorting the subarrays. With an average time complexity of O(n log n), it is a commonly used and efficient sorting algorithm. **Basic Steps**: 1. Select a pivot (e.g., the rightmost element). 2. Partition the array around the pivot. 3. Recursively sort the left and right subarrays. **Partition Logic**: using the rightmost element as the pivot, keep an index `i` pointing to the end of the "less than pivot" region; traverse the array, swapping elements smaller than the pivot into that region; finally, move the pivot to its correct position. The Java code implements this logic. The time complexity is O(n log n) on average and O(n²) in the worst case, with an average space complexity of O(log n) for the recursion. Notable drawbacks are that QuickSort is unstable and its worst-case performance can be poor, so optimizing pivot selection is crucial for good performance.
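
The article's code is in Java; sketched here in Python to match this page's other examples, the Lomuto-style partition it describes looks roughly like this (test data illustrative):

```python
def partition(arr, low, high):
    """Lomuto partition: rightmost element as pivot."""
    pivot = arr[high]
    i = low - 1                        # end of the "less than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]      # grow the less-than region
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # pivot to final position
    return i + 1

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)
    return arr

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # [10, 30, 40, 50, 70, 80, 90]
```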

Read More
Implementing the Bubble Sort Algorithm in Java

Bubble Sort is a basic sorting algorithm whose core idea is to repeatedly compare adjacent elements and swap their positions, allowing larger elements to "bubble up" to the end of the array (for ascending order). Sorting completes over multiple passes: each pass places the largest element of the current unsorted portion at its end, until the array is sorted. In the Java implementation, the outer loop controls the number of sorting rounds (at most n-1), while the inner loop compares adjacent elements and performs swaps. A key optimization is a `swapped` flag: if no swaps occur in a round, the algorithm terminates early, reducing the best-case time complexity to O(n). The worst and average-case time complexities are O(n²), with a space complexity of O(1) (in-place sorting). Its principle is simple and intuitive, which makes it useful for teaching the core concepts of sorting, but it is inefficient and appropriate only for small-scale data; for large-scale sorting, more efficient algorithms such as Quick Sort are typically used.

Read More
Introduction to PyTorch Neural Networks: Fully Connected Layers and Backpropagation Principles

This article introduces the basics of PyTorch neural networks, focusing on fully connected layers and backpropagation. A fully connected layer connects every neuron of the previous layer to every neuron of the current layer, producing an output computed as the weight matrix times the input, plus a bias vector. Forward propagation is the forward computation of data from the input layer through fully connected layers and activation functions to the output layer; for example, in a two-layer network: input → fully connected → ReLU → fully connected → output. Backpropagation is the core of neural network learning, adjusting parameters via gradient descent: based on the chain rule, it works backward from the output layer, computing the gradient of the loss with respect to each parameter. PyTorch's autograd automatically records the computation graph and performs this gradient calculation. One training step consists of forward propagation, loss calculation, backpropagation (loss.backward()), and a parameter update (using an optimizer such as SGD). Key concepts: fully connected layers combine features, forward propagation computes outputs, backpropagation minimizes the loss through gradient descent, and automatic differentiation simplifies gradient calculation. Understanding these principles helps with model debugging and optimization.
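
A minimal sketch of one training step for the two-layer network described (the layer sizes, learning rate, and dummy data are illustrative):

```python
import torch
import torch.nn as nn

# Illustrative sizes: 784-dim input, 128 hidden units, 10 output classes.
model = nn.Sequential(
    nn.Linear(784, 128),  # fully connected: weight matrix times input, plus bias
    nn.ReLU(),            # non-linearity between the two layers
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 784)         # a dummy batch of 32 inputs
y = torch.randint(0, 10, (32,))  # dummy class labels

logits = model(x)                # forward propagation
loss = loss_fn(logits, y)        # loss calculation
optimizer.zero_grad()            # clear gradients from the previous step
loss.backward()                  # backpropagation via autograd
optimizer.step()                 # parameter update (gradient descent)
```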

Read More
Quick Start with PyTorch: Tensor Dimension Transformation and Common Operations

This article introduces the core knowledge of PyTorch tensors, including basics, dimension transformations, common operations, and exercise suggestions. Tensors are the basic structure for storing data in PyTorch, similar to NumPy arrays, and support GPU acceleration and automatic differentiation. They can be created using `torch.tensor()` from lists/numbers, `torch.from_numpy()` from NumPy arrays, or built-in functions to generate tensors of all zeros, ones, or random values. Dimension transformation is a key operation: `reshape()` flexibly adjusts the shape (keeping the total number of elements unchanged), `squeeze()` removes singleton dimensions, `unsqueeze()` adds singleton dimensions, and `transpose()`/`permute()` swap dimensions. Common operations include basic arithmetic operations, matrix multiplication with `matmul()`, broadcasting (automatic dimension expansion for operations), and aggregation operations such as `sum()`, `mean()`, and `max()`. The article suggests consolidating tensor operations through exercises, such as dimension adjustment, broadcasting mechanisms, and dimension swapping, to master the "shape language" and lay a foundation for subsequent model construction.
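
A short sketch exercising the operations listed above (shapes chosen purely for illustration):

```python
import torch

t = torch.arange(24)                  # shape (24,)
a = t.reshape(2, 3, 4)                # same 24 elements, shape (2, 3, 4)
b = a.unsqueeze(0)                    # add a singleton dim   -> (1, 2, 3, 4)
c = b.squeeze()                       # drop singleton dims   -> (2, 3, 4)
print(a.shape, b.shape, c.shape)
print(c.permute(2, 0, 1).shape)       # reorder all dims      -> (4, 2, 3)
print(c.transpose(0, 1).shape)        # swap two dims         -> (3, 2, 4)

m = torch.randn(3, 4)
v = torch.randn(4)
print(torch.matmul(m, v).shape)       # matrix-vector product -> (3,)
print((m + torch.randn(1, 4)).shape)  # broadcasting expands (1, 4) to (3, 4)
print(m.sum(), m.mean(), m.max())     # aggregation over all elements
```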

Read More
PyTorch Basics Tutorial: Practical Data Loading with Dataset and DataLoader

Data loading is a crucial step in machine learning training, and PyTorch's `Dataset` and `DataLoader` are the core tools for managing it efficiently. `Dataset` is an abstract base class for data storage: subclasses implement `__getitem__` (read a single sample) and `__len__` (return the total number of samples); alternatively, `TensorDataset` can wrap tensor data directly. `DataLoader` handles batching and supports parameters such as `batch_size` (batch size), `shuffle` (shuffling order), and `num_workers` (parallel loading with worker subprocesses) to improve training efficiency. In practice, taking MNIST as an example, image data can be loaded via `torchvision` and combined with `Dataset` and `DataLoader` for efficient iteration. Note that under Windows it is safest to keep `num_workers=0` to avoid multiprocessing issues. During training, set `shuffle=True` to shuffle the data; for validation/test sets, set `shuffle=False` so the order stays deterministic. Key steps: 1. Define a `Dataset` to store data; 2. Create a `DataLoader` with the desired parameters; 3. Iterate over the `DataLoader` to feed data into the model for training. These two components are the cornerstones of data processing; once mastered, they can be applied flexibly to all kinds of data loading requirements.
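
A minimal sketch of the three key steps, using dummy tensors instead of MNIST (names and sizes are illustrative):

```python
import torch
from torch.utils.data import Dataset, DataLoader, TensorDataset

# Dummy data: 100 samples with 4 features each, binary labels.
features = torch.randn(100, 4)
labels = torch.randint(0, 2, (100,))

# Option 1: wrap existing tensors directly.
dataset = TensorDataset(features, labels)

# Option 2: a custom Dataset implementing __len__ and __getitem__.
class MyDataset(Dataset):
    def __init__(self, features, labels):
        self.features, self.labels = features, labels
    def __len__(self):
        return len(self.features)        # total number of samples
    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]  # one sample

loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)

for batch_x, batch_y in loader:          # iterate in shuffled mini-batches
    print(batch_x.shape, batch_y.shape)  # torch.Size([16, 4]) torch.Size([16])
    break
```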

Read More
Playing with PyTorch from Scratch: Data Visualization and Model Evaluation Techniques

This article introduces core data visualization and model evaluation skills in PyTorch to support efficient model debugging. For data visualization, Matplotlib can be used to inspect data distributions (e.g., histograms of MNIST samples and labels), while TensorBoard monitors the training process (e.g., scalar curves and model structures). For model evaluation, classification tasks should focus on accuracy and confusion matrices (e.g., an MNIST classification example), while regression tasks use MSE and MAE. In practice, visualization helps identify issues (e.g., confusion between "8" and "9") and drive iterative model improvements. Advanced applications include GAN visualization and real-time metric calculation. Mastering these skills allows quick problem localization and better data understanding, laying a foundation for developing complex models.
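
A minimal sketch of the classification metrics mentioned (accuracy and a confusion matrix), using random dummy predictions for illustration:

```python
import torch

# Dummy predictions and labels for a 10-class task (e.g., MNIST digits).
preds = torch.randint(0, 10, (1000,))
labels = torch.randint(0, 10, (1000,))

accuracy = (preds == labels).float().mean().item()  # fraction of correct predictions
print(f"accuracy: {accuracy:.3f}")

# Confusion matrix: rows = true class, columns = predicted class.
confusion = torch.zeros(10, 10, dtype=torch.long)
for t, p in zip(labels, preds):
    confusion[t, p] += 1
print(confusion)  # large off-diagonal entries reveal confused class pairs
```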

Read More