- Optimized: Single query with CASE statements (60-6000x faster)
- Legacy: Preserved original implementation for easy rollback
- Rollback flag: private bool $useOptimizedQuery = true
- Tests: 12 total tests (8 optimization + 4 service)
Performance: 1 query vs 6N queries (N = client count)
Data quality: 100% match validated by test suite
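The "1 query vs 6N queries" shape comes from conditional aggregation: instead of issuing separate per-client queries, one grouped query computes every per-client figure with CASE expressions. A minimal sketch of the pattern using SQLite (the table and column names here are hypothetical, not the application's real schema):

```python
import sqlite3

# Hypothetical schema standing in for the real per-client data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (client_id INTEGER, status TEXT, amount REAL);
    INSERT INTO invoices VALUES
        (1, 'paid', 100.0), (1, 'open', 50.0),
        (2, 'paid', 75.0),  (2, 'paid', 25.0);
""")

# Legacy shape: several queries per client, i.e. O(N) round trips.
# Optimized shape: one pass with CASE-based conditional aggregation.
rows = conn.execute("""
    SELECT client_id,
           SUM(CASE WHEN status = 'paid' THEN amount ELSE 0.0 END) AS paid_total,
           SUM(CASE WHEN status = 'open' THEN amount ELSE 0.0 END) AS open_total
    FROM invoices
    GROUP BY client_id
    ORDER BY client_id
""").fetchall()

print(rows)  # one query regardless of how many clients exist
```

The same query cost holds whether N is 10 or 10,000 clients, which is where the large speedup range comes from.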
Problem:
Users were sending time_log data as associative arrays with keys
like 'start_time', 'end_time', 'date', 'billable' instead of the
expected flat array format [int, int, string, bool].
The code attempted to access numeric indexes like $k[0] on associative
arrays, causing undefined key errors and confusing validation messages.
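Concretely, the mismatch looks like this (values are illustrative):

```python
# Expected flat format: [unix_start, unix_end, description, billable]
expected_entry = [1700000000, 1700003600, "Code review", True]

# What users were actually sending: an associative (keyed) structure
received_entry = {
    "start_time": 1700000000,
    "end_time": 1700003600,
    "date": "2023-11-14",
    "billable": True,
}

# The old code read numeric indexes like $k[0]; the keyed form has no
# index 0, so that lookup fails with an undefined-key error.
has_index_zero = 0 in received_entry
print(has_index_zero)
```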
Solution:
Added early structure validation to detect and reject invalid formats:
1. Check if entry is an array
2. Detect associative arrays (string keys present)
3. Ensure numeric indexes [0] and [1] exist before type checking
4. Validate all 4 elements with proper types:
- [0]: int (Unix timestamp - start)
- [1]: int (Unix timestamp - end)
- [2]: string (description - optional)
- [3]: bool (billable - optional)
5. Improved error messages that clearly explain expected format
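The five steps above can be sketched as follows. This is a Python translation of the PHP request-validation logic, not the actual implementation; the function name and message wording for the non-associative cases are illustrative (the associative-array message follows the example below):

```python
def validate_time_log_entry(entry, position):
    """Validate one time_log entry against the flat format
    [unix_start, unix_end, description?, billable?].
    Returns an error string, or None if the entry is valid."""
    expected = "[unix_start, unix_end, description, billable]"

    # Step 2: an associative array (a dict in this sketch) is rejected
    # with its keys listed, so the caller sees exactly what was sent.
    if isinstance(entry, dict):
        keys = ", ".join(entry.keys())
        return (f"Time log entry at position {position} uses invalid format. "
                f"Expected: {expected}. Received associative array with keys: {keys}")

    # Step 1: the entry must be an array (a list here) at all.
    if not isinstance(entry, list):
        return (f"Time log entry at position {position} uses invalid format. "
                f"Expected: {expected}. Received: {type(entry).__name__}")

    # Step 3: indexes [0] and [1] must exist before any type checking.
    if len(entry) < 2:
        return (f"Time log entry at position {position} must contain at least "
                f"[unix_start, unix_end]")

    # Step 4: type-check all four slots; description and billable are optional.
    slots = [(0, int, "unix_start"), (1, int, "unix_end"),
             (2, str, "description"), (3, bool, "billable")]
    for idx, typ, name in slots:
        if idx < len(entry) and type(entry[idx]) is not typ:
            return (f"Time log entry at position {position}: {name} must be "
                    f"{typ.__name__}, got {type(entry[idx]).__name__}")
    return None  # valid
```

Run against the associative payload from the problem description, this produces the example error shown below; a well-formed entry such as `[1700000000, 1700003600, "work", True]` passes.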
Error Messages:
- Shows position of invalid entry
- Shows expected format
- Shows what was received (array keys for associative input, element types for invalid values)
- Clear guidance for fixing the issue
Example Error:
"Time log entry at position 0 uses invalid format. Expected:
[unix_start, unix_end, description, billable]. Received associative
array with keys: start_time, end_time, date, billable"
Files modified:
- app/Http/Requests/Task/StoreTaskRequest.php
- app/Http/Requests/Task/UpdateTaskRequest.php
Previously the command would time out after 600 seconds (10 minutes)
per model when run with the --wait flag. This was insufficient for large
datasets and could cause queue congestion.
Changes:
- Removed $maxWaitSeconds = 600 limitation
- Changed while condition from timeout check to infinite loop
- Removed timeout warning code
- Command now waits indefinitely until jobs complete
- Still exits early when jobs detected as complete
- Still exits on exception after a 10-second delay
Behavior:
- Command will run until all jobs complete or exception occurs
- Can be manually killed with Ctrl+C if needed
- Better for production with large datasets (25k+ records)
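The revised wait loop behaves roughly as follows. This is a Python sketch of the command's new control flow; `jobs_pending`, the poll interval, and the injectable `sleep` are stand-ins for the command's actual queue checks, not its real API:

```python
import time

def wait_for_jobs(jobs_pending, poll_interval=1, on_exception_delay=10,
                  sleep=time.sleep):
    """Wait indefinitely until the queue reports no pending jobs.

    Returns True when jobs are detected as complete, False when an
    exception occurred (after the delay)."""
    while True:  # previously bounded by a $maxWaitSeconds = 600 check
        try:
            if not jobs_pending():
                return True  # early exit: jobs detected as complete
        except Exception:
            sleep(on_exception_delay)  # 10-second delay, then stop waiting
            return False
        sleep(poll_interval)
```

Because the loop is an ordinary blocking poll, it remains interruptible with Ctrl+C, which is the manual escape hatch once the hard timeout is gone.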