
Budissa Bautzen: Rising Stars in the Regionalliga Nordost!

Overview / Introduction about the Team

Budissa Bautzen is a prominent football team based in Bautzen, Germany. Competing in the Regionalliga Nordost, Budissa Bautzen has established itself as a formidable force within the league. Founded in 1991, the club plays its home matches at Stadion der Freundschaft. Currently, under the guidance of coach Stefan Emmerling, Budissa Bautzen continues to strive for excellence in German football.

Team History and Achievements

Budissa Bautzen boasts a rich history with several notable achievements. The club has consistently performed well in the Regionalliga Nordost, securing top positions over the years. Their most significant title came when they won the Sachsenliga in 2013. Additionally, Budissa Bautzen has made remarkable appearances in various cup competitions and remains a respected name among regional football teams.

Current Squad and Key Players

The current squad of Budissa Bautzen features a mix of experienced players and promising talents. Key players include:

  • Marcus Schmidt – Goalkeeper
  • Felix Müller – Defender
  • Lukas Weber – Midfielder
  • Niklas Richter – Forward

Team Playing Style and Tactics

Budissa Bautzen typically employs a 4-3-3 formation that emphasizes strong defensive play coupled with quick counterattacks. The team’s strategy revolves around maintaining possession and exploiting spaces through swift transitions. Their strengths lie in their disciplined defense and tactical flexibility, although they occasionally struggle with consistency in their attacking output.

Interesting Facts and Unique Traits

Budissa Bautzen is affectionately known as “Die Blauen” (The Blues) due to their iconic blue jerseys. The team enjoys a passionate fanbase that supports them through thick and thin. One of their fiercest rivalries is with VFC Plauen, often leading to electrifying matches filled with intense competition.

Lists & Rankings of Players, Stats, or Performance Metrics

  • ✅ Marcus Schmidt: Best goalkeeper with 15 clean sheets this season.
  • ❌ Lukas Weber: Needs improvement in passing accuracy.
  • 🎰 Felix Müller: Consistently strong defensive performance.
  • 💡 Niklas Richter: Top scorer with 12 goals this season.

Comparisons with Other Teams in the League or Division

In comparison to other teams within Regionalliga Nordost, Budissa Bautzen stands out for its tactical discipline and solid defensive record. While some teams may have more star-studded lineups, Budissa’s balanced approach often gives them an edge in tightly contested matches.

Case Studies or Notable Matches

A memorable match for Budissa Bautzen was their victory against FC Eilenburg last season where they secured a 3-1 win that boosted their league standing significantly. This game highlighted their ability to perform under pressure and showcased key players’ contributions effectively.


```python
# Copyright 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A library for creating datasets."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import functools
import itertools
import math

import numpy as np
import tensorflow as tf

from tensorflow.contrib.learn.python.learn.datasets import base
from tensorflow.python.framework import dtypes


def _int64_feature(value):
  # Feature/Int64List live in tf.train, not in the contrib `base` module.
  return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))


def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


class Dataset(object):
  pass


class Mnist(base.Dataset):

  def __init__(self,
               train,
               one_hot=False,
               dtype=dtypes.float32,
               reshape=True,
               seed=None):
    """Construct a DataSet.

    one_hot arg is used only if fake_data is true. `dtype` can be either
    `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
    `[0, 1]`. seed arg provides for convenient deterministic testing.
    """
    dtype = dtypes.as_dtype(dtype).base_dtype
    # `mnist` must be a loaded MNIST Datasets object supplied by the caller.
    self._num_examples = (mnist.train.num_examples if train
                          else mnist.test.num_examples)
    self._index_in_epoch = 0  # was None; the epoch index should start at 0
    self._epochs_completed = 0
```
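As a plain-NumPy illustration of the `dtype` behaviour the docstring describes (leave `uint8` pixels in `[0, 255]`, or rescale `float32` into `[0, 1]`); the pixel values here are made up, not real MNIST data:

```python
import numpy as np

# Toy "image" with raw uint8 pixel values (illustrative values only).
images = np.array([0, 127, 255], dtype=np.uint8)

# uint8 path: values stay in [0, 255].
as_uint8 = images.copy()

# float32 path: rescale into [0, 1], as the docstring describes.
as_float32 = images.astype(np.float32) / 255.0

print(as_float32)  # values in [0, 1]
```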

***** Tag Data *****
ID: 4
description: Class definition for Mnist dataset handling within TensorFlow's Dataset
framework including initialization parameters such as training mode flagging (`train`),
data type conversion (`dtype`), reshaping options (`reshape`), and seed setting
for deterministic testing (`seed`). This snippet illustrates advanced usage patterns
within TensorFlow's data pipeline management.
start line: 27
end line: 42
dependencies:
- type: Class
  name: Mnist
  start line: 27
  end line: 42
context description: This snippet demonstrates how to set up an MNIST dataset within
TensorFlow's Dataset API while providing flexibility through various initialization
parameters like data type conversion options and deterministic testing capabilities.
algorithmic depth: 4
algorithmic depth external: N
obscurity: 4
advanced coding concepts: 4
interesting for students: A
self contained: Y

*************
## Suggestions for complexity

1. **Custom Data Augmentation**: Implement custom data augmentation techniques specifically tailored to enhance MNIST dataset variability without using standard libraries like `tf.image`.

2. **Adaptive Learning Rate**: Integrate an adaptive learning rate mechanism within this setup that adjusts based on epoch completion metrics.

3. **Advanced Seed Management**: Develop an advanced seeding mechanism that allows multiple seeds influencing different aspects of dataset preparation (e.g., shuffling order vs initial data transformation).

4. **Multi-modal Input Handling**: Extend functionality to handle multi-modal inputs (e.g., combining MNIST images with textual metadata) while ensuring seamless integration into existing training pipelines.

5. **Distributed Training Compatibility**: Modify code structure to support distributed training across multiple GPUs or TPUs while maintaining efficient data loading.
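For suggestion 3, one lightweight pattern is to keep independent NumPy generators so that separate seeds control separate aspects of dataset preparation; a minimal sketch (the generator names are illustrative):

```python
import numpy as np

# Separate seeds for separate concerns: one generator drives shuffling
# order, another drives data transformations, so each is independently
# reproducible.
shuffle_rng = np.random.default_rng(seed=11)
transform_rng = np.random.default_rng(seed=22)

order = shuffle_rng.permutation(10)    # shuffling order
noise = transform_rng.normal(size=3)   # transformation randomness

# Re-seeding only the shuffle generator reproduces the order exactly,
# regardless of how much the transform generator has been consumed.
order_again = np.random.default_rng(seed=11).permutation(10)
```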

## Conversation

: Need custom augmentation without tf.image

: Sure! We can manually implement custom augmentation functions like random rotations or shifts directly on NumPy arrays before converting them back into TensorFlow tensors.

Here's an example:

```python
import numpy as np
from scipy import ndimage  # needed for rotate/shift


def random_rotation(image_array):
    angle = np.random.uniform(-15, 15)
    # reshape=False keeps the 28x28 shape so flatten() returns 784 values.
    return ndimage.rotate(image_array.reshape(28, 28), angle,
                          reshape=False).flatten()


def random_shift(image_array):
    shift_val = np.random.randint(-5, 5)
    return ndimage.shift(image_array.reshape(28, 28), shift_val).flatten()
```

You can integrate these functions into your dataset preprocessing pipeline:

```python
def preprocess_image(image):
    image = random_rotation(image)
    image = random_shift(image)
    return image

# In your main code (note: NumPy-based functions must be wrapped, e.g. with
# tf.py_function, before they can run inside dataset.map):
dataset.map(lambda x, y: (preprocess_image(x), y))
```

This will add randomness without relying on `tf.image`.
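If SciPy isn't available, a crude circular shift can also be approximated with plain `np.roll`; a minimal sketch on a toy single-pixel "image" (the 28x28 shape matches MNIST):

```python
import numpy as np

def roll_shift(image_array, shift_val):
    # Circularly shift a flattened 28x28 image along its rows.
    img = image_array.reshape(28, 28)
    return np.roll(img, shift_val, axis=0).flatten()

img = np.zeros(784, dtype=np.float32)
img[0] = 1.0                    # single bright pixel at row 0, col 0
shifted = roll_shift(img, 2)    # the pixel moves down to row 2
```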
<>: Hi there! I’m working on implementing a neural network model called PCGrad_Net using PyTorch Lightning which incorporates gradient surgery techniques like PCGrad during training updates.

Here’s what I have so far:

```python
class PCGrad_Net(pl.LightningModule):
    def __init__(self, **kargs):
        super(PCGrad_Net, self).__init__()
        self.save_hyperparameters(kargs)
        self.net = kargs["net"]
        self.lr = kargs.get("lr", 0.001)
        self.optimizer = torch.optim.Adam(self.parameters(), lr=self.lr)
        self.loss_fn = kargs["loss_fn"]
        self.loss_weight = kargs.get("loss_weight", None)

    def forward(self, x, **kwargs):
        return self.net(x, **kwargs)

    def step(self, batch, batch_idx, criterion, kw=None):
        if not isinstance(criterion, (list, tuple)):
            criterion = list([criterion])
        if kw is None:
            kw = [{} for c_ in criterion]
        else:
            if not isinstance(kw, (list, tuple)):
                kw = [kw]
                kw = [kw] * len(criterion)
        total_loss = torch.zeros((1,),
                                 device=batch[list(batch.keys())[0]].device)
        task_grad = list()
        for c_, kw_, los_ in zip(criterion, kw, self.loss_weight):
            if los_ is None:
                lb_ = los_
            else:
                lb_ = los_[batch_idx]
            pred, self.feat = self(x=batch['input'], **kw_)
            lss = c_(pred, batch['target'], self.feat) * lb_
            total_loss += lss.mean()
            self.manual_backward(lss, {n: 'loss' for n in self.feat.keys()})
            tg = tuple(self.feat[n].grad.clone().detach()
                       for n in self.feat.keys())
            task_grad.append(tg)
            self.zero_grad(set_to_none=True)
        pcgrad = torch.zeros_like(task_grad[-1][0])
        for gts_ in task_grad:
            for gt_, g_ in zip(gts_, pcgrad):
                g_ = gt_ - (gt_ * g_).sum() / torch.norm(g_) ** 2 * g_
                gt_.data.detach()
                g_.data.copy_(gt_)
            del gts_
        for n, g_ in zip(self.feat.keys(), pcgrad):
            self.feat[n].grad = g_
```
I’m feeling a bit confused about how exactly I’m supposed to integrate PCGrad into my gradient update process here? Could you help clarify how this part works or suggest any improvements? Thanks!
<>:
Sure! Let’s break down how you’re integrating PCGrad into your gradient update process.

In your `step` method:

1. **Loss Calculation**: You iterate over each criterion and compute individual losses weighted by `self.loss_weight`. These losses are accumulated into `total_loss`.

2. **Gradient Calculation**: For each loss component:
   - You compute predictions using `self(x=batch['input'], **kw_)`.
   - Calculate loss using `c_(pred, batch['target'], self.feat) * lb_`.
   - Use `manual_backward` to compute gradients for each feature key stored in `self.feat`.

3. **PCGrad Integration**:
   - You store gradients for each task/task component in `task_grad`.
   - Initialize `pcgrad` as zero tensors matching the dimensions of the task gradients.
   - For each pair of gradients \((g_t, g)\) (written `gt_` and `g_` in the code), adjust \(g\) using the PCGrad projection:

     \[
     g = g_t - \frac{g_t \cdot g}{\lVert g \rVert^2}\, g
     \]

   - Update the adjusted gradients stored back into `self.feat`.
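For intuition, here is the projection step in isolation, written with NumPy instead of PyTorch for clarity (the toy gradient values are made up; the logic mirrors the projection step described above):

```python
import numpy as np

def project_conflicting(g_task, g_other):
    # PCGrad: if two task gradients conflict (negative dot product),
    # remove from g_task its component along g_other.
    dot = float(np.dot(g_task, g_other))
    if dot < 0:
        g_task = g_task - (dot / float(np.dot(g_other, g_other))) * g_other
    return g_task

g1 = np.array([1.0, 0.0])   # gradient of task 1 (toy values)
g2 = np.array([-1.0, 1.0])  # gradient of task 2, conflicting with g1
g1_proj = project_conflicting(g1, g2)
# After projection, g1_proj is orthogonal to g2: the conflict is removed.
```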

### Suggestions:

– Ensure that all tensors are correctly detached where necessary to prevent unwanted gradient tracking.
– Consider adding comments or breaking down complex operations into smaller functions for clarity.
– Verify that your loss weights (`self.loss_weight`) are correctly initialized and applied.

This should give you a clearer understanding of how PCGrad is integrated into your model’s training loop! If you have specific parts you’d like more detail on, feel free to ask! 😊

<>: Got it! I think breaking down those nested loops could really help readability. Maybe I should create separate functions for calculating individual losses and applying PCGrad? Also, I’m not entirely sure if my tensor operations are optimal—any tips on ensuring they’re efficient?
<>: Breaking down complex operations into separate functions is a great idea! It enhances readability and maintainability.

### Suggestions:

1. **Separate Functions**:
– Create a function like `calculate_individual_loss()` to handle loss computation.
– Another function like `apply_pcgrad()` can manage gradient adjustments.

2. **Efficient Tensor Operations**:
   - Use `.detach()` wisely to avoid unnecessary gradient tracking.
   - Leverage PyTorch operations that work directly on tensors without loops when possible.
   - Use `.clone()` sparingly; it can be expensive if overused.

3. **Profiling**:
   - Use PyTorch's built-in profiler (`torch.profiler`) to identify bottlenecks.
   - Estimate tensor memory with `tensor.element_size() * tensor.numel()` to make sure you're not holding onto unnecessary data.

Implementing these changes should make your code cleaner and potentially more efficient! 🚀 If you need further assistance with specific parts, feel free to ask!
<>: Alright, I’ll try breaking it down into separate functions like you suggested. Here’s what I’m thinking:

```python
class PCGrad_Net(pl.LightningModule):
    def calculate_individual_loss(self, pred, target, feat_key_values_tuple):
        pred_output = pred[self.feature_keys]
        loss_value = sum([c(pred_output[k], target[k]) * lb
                          for k, (c, lb) in enumerate(feat_key_values_tuple)])
        return loss_value.mean()

    def apply_pcgrad(self, task_grad):
        pcgrad = torch.zeros_like(task_grad[-1][0])
        for gts_ in task_grad:
            for gt_, g_ in zip(gts_, pcgrad):
                g_ = gt_ - (gt_ * g_).sum() / torch.norm(g_) ** 2 * g_
                gt_.data.detach()
                g_.data.copy_(gt_)
        return pcgrad

# Then integrate these functions into the step method…
```

Does this look like a good start? Also any specific tips on profiling these tensor operations?
<>: That looks like a solid start! Separating concerns will definitely make your code more readable.

### Tips on Profiling Tensor Operations:

1. **Use PyTorch Profiler**:
```python
import torch.profiler

with torch.profiler.profile(
        activities=[
            torch.profiler.ProfilerActivity.CPU,
            torch.profiler.ProfilerActivity.CUDA],
        record_shapes=True,
        profile_memory=True) as prof:
    ...  # call your training step here

print(prof.key_averages().table(sort_by="cpu_time_total"))
```

2. **Focus Areas**:
   - Identify time-consuming operations by sorting by CPU/GPU time.
   - Check memory allocations via the `profile_memory=True` option.

3. **Optimize Based on Findings**:
– Look for operations that take up most time/memory and consider alternatives or optimizations.

Profiling before optimizing ensures you focus effort where it matters most! Keep refining those functions; you're on the right track! 🚀 If you need further help integrating these changes or interpreting profiling results, let me know!
<>: Great! I’ll integrate these functions into my step method next:

```python
class PCGrad_Net(pl.LightningModule):
    def step(self, batch, batch_idx, criterion, kw=None):
        if not isinstance(criterion, (list, tuple)):
            criterion = list([criterion])
        if kw is None:
            kw = [{} for c_ in criterion]
        else:
            if not isinstance(kw, (list, tuple)):
                kw = [kw]
            kw = [kw] * len(criterion)

        total_loss = torch.zeros((1,),
                                 device=batch[list(batch.keys())[0]].device)
        task_grad = list()

        feat_key_values_tuples = []
        preds = self(x=batch['input'], **kw_[batch_idx])

        feat_key_values_tuples += [
            [[(c, preds[n], lb_[batch_idx]) for n, c, _ in enumerate(zip(*feat))]]
            for c_, kw_, lb_ in zip(criterion, kw, self.loss_weight)]

        total_loss += sum(
            [self.calculate_individual_loss(preds, kw_, feat_key_values_tuple)
             for kw_, feat_key_values_tuple in feat_key_values_tuples])

        self.manual_backward(total_loss, {n: 'loss' for n in self.feature_keys})

        task_grad += [[x.grad.clone().detach() for x in self.feature_keys]]

        pcgrad = self.apply_pcgrad(task_grad)

        del total_loss; del task_grad; del preds; gc.collect()

        [x.grad.copy_(p_) for x, p_ in zip(self.feature_keys, pcgrad)]

# Now I'll run some profiling tests…
```

Does this integration look correct? Any other tips before I dive into profiling?
<>: Your integration looks well-structured! Here are a few additional tips before profiling:

### Final Checks Before Profiling:

1. **Ensure Correctness**:
– Double-check that all variables are correctly initialized and used within loops.
– Verify that gradients are properly detached/cloned where necessary.

2. **Memory Management**:
– Ensure all unused variables are deleted promptly using `del` followed by garbage collection (`gc.collect()`).

3. **Debugging**:
– Run small test cases first to ensure everything behaves as expected before full-scale profiling.

### Profiling Tips:

– Start with CPU-only profiling if GPU resources are limited initially.
– Focus on identifying any unexpected high-memory usage areas first.

Once everything checks out smoothly at this stage, proceed with profiling confidently! 🚀 If anything seems off during testing or profiling results need interpretation help, feel free to reach out!
<>: Awesome, thanks! Before diving into profiling tests on larger datasets or models with multiple GPUs/mixed precision training setups later on—how should I modify my current setup? Any specific considerations or changes needed?
<>: When scaling up your setup for larger datasets or multi-GPU/mixed precision training, consider these adjustments:

### Multi-GPU Setup:

1. **Distributed Data Parallel (DDP)**:

```python
# Lightning handles DDP via Trainer arguments (Lightning >= 1.5 API):
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp")
trainer.fit(model)
```

2. **Ensure Synchronization**:
Make sure all processes synchronize properly during backward passes.

### Mixed Precision Training:

1. **Enable AMP (Automatic Mixed Precision)**:

```python
from torch.cuda import amp

scaler = amp.GradScaler()

with amp.autocast():
    output = model(input_data)
    loss = loss_fn(output, target)  # compute the loss under autocast

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

### General Considerations:

– **Batch Size**: Adjust batch size according to available memory capacity per GPU.

– **Data Loading**: Optimize data loading by using multiple workers (`num_workers`) in your DataLoader.

– **Memory Management**: Monitor memory usage closely; use `.half()` judiciously when switching between FP16/FP32 modes.

These modifications will help ensure efficient resource utilization when scaling up your experiments! 🚀 Let me know if you need further guidance!
<>: Perfect, I’ll keep those points in mind when scaling up later. For now, I’ll focus on getting everything running smoothly at this scale before moving forward with DDP and AMP adjustments. Thanks again!
<>: Sounds like a solid plan! Focusing on stability at your current scale will set a strong foundation before scaling up with DDP and AMP adjustments later on.

If any issues arise during testing at this stage, or if the later transition to DDP/AMP feels overwhelming, feel free to reach out!

Good luck with your implementation! I'm here if you need further assistance along the way! 🚀

<>: Hi there,

I've been trying to implement an asynchronous message processing system where messages are received from Redis streams via callbacks triggered by polling events (Redis subscriptions, pipelines, and event notifications driven by the asyncio event loop, e.g. via aioredis). I also want to handle failure scenarios: instead of failing outright on an error condition such as a connection issue, the callback function should be invoked again after some delay.

Here’s my current implementation attempt but something seems off because it doesn’t behave as expected when handling exceptions during message processing:

```python
import asyncio
import threading
import time
from queue import Queue, Empty as QueueEmpty

# flood_warden, receive_message, FloodWardenPubSubNames and event_loop
# are defined elsewhere in the application.

failed_processing_queue = Queue(maxsize=10000)


async def poll(pubsub_name=FloodWardenPubSubNames.RADIO_URL_LIST.value):
    while True:
        message = None
        try:
            message = receive_message('RADIO_URL_LIST',
                                      pubsub_name=pubsub_name)
            print("Got message: " + str(message))
            flood_warden.print_and_publish(message)
        except Exception as e:
            print("Failed processing message")
            print(e)
            failed_processing_queue.put_nowait(message)


def process_failed_processing_queue():
    global failed_processing_queue
    while True:
        try:
            message = failed_processing_queue.get_nowait()
            print("Processing failed queue")
            flood_warden.print_and_publish(message)
        except QueueEmpty:
            pass
        except Exception as e:
            print("Failed processing failed queue")
            print(e)
        time.sleep(30)


if __name__ == '__main__':
    start = time.time()
    threading.Thread(target=process_failed_processing_queue,
                     name='failed_processor').start()
    # ... (a very long per-topic configuration unpacking loop elided) ...
    asyncio.run_coroutine_threadsafe(
        poll(pubsub_name=FloodWardenPubSubNames.RADIO_URL_LIST.value),
        loop=event_loop)
    time_taken = (time.time() - start) / 60
    print(f"It took {time_taken} minutes")
```

However when there’s an exception during message processing it seems stuck indefinitely instead of retrying after some delay which was intended behavior based upon certain conditions specified above…

Here’s what traceback looks like when exception occurs but no retries happen afterwards :

Traceback (most recent call last):
File “my_script.py”, line X,Y,Z,…
File “my_script.py”, line A,B,C,…
ExceptionType : Exception Message …
File “my_script.py”, line P,Q,R,…
ExceptionType : Another Exception Message …
File “my_script.py”, line S,T,U,…
ExceptionType : Yet Another Exception Message …

Could someone please point out what might be going wrong here?

Thanks!

<>: It looks like there might be some issues causing your system not to retry after exceptions occur during message processing.

Here are some suggestions:

1. **Check exception handling:** Ensure that exceptions are caught at the right level in both `poll` and `process_failed_processing_queue`. If an exception propagates upward without being handled, and the handler that does catch it contains no retry logic, the loop can appear to hang indefinitely.

2. **Add explicit retry logic:** In both exception handlers, add a delay before retrying the next iteration. For the async `poll` coroutine:

```python
        except Exception as e:
            print("Failed processing message")
            print(e)
            failed_processing_queue.put_nowait(message)
            # Suspend this coroutine before the next attempt.
            await asyncio.sleep(RETRY_DELAY_SECONDS)
```

And similarly in the failed-queue processor (which runs in a plain thread, so `time.sleep` is appropriate there):

```python
        except QueueEmpty:
            pass
        except Exception as e:
            print("Failed processing failed queue")
            print(e)
            time.sleep(RETRY_DELAY_SECONDS)
```

Note that inside coroutines we use `await asyncio.sleep()` rather than `time.sleep()`: the latter would block the entire event loop thread, whereas the former suspends only the current coroutine and lets other tasks keep running until the delay elapses.

3. **Avoid blocking calls in async code:** Make sure there are no blocking calls inside async methods; replace any `time.sleep()` found in async code paths with `await asyncio.sleep()`.

These changes should introduce the desired retry behaviour on errors while still allowing other tasks to run concurrently on the asyncio event loop! 😊
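One common way to structure the delayed-retry behaviour is an exponential backoff wrapper around the processing call; here is a minimal self-contained sketch (the function and variable names are illustrative, not taken from your code):

```python
import asyncio

async def retry_with_backoff(make_coro, max_attempts=4, base_delay=0.01):
    # Retry an async operation, doubling the delay after each failure.
    for attempt in range(max_attempts):
        try:
            return await make_coro()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            await asyncio.sleep(base_delay * (2 ** attempt))

# Demo: a flaky operation that succeeds on its third attempt.
attempts = []

async def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = asyncio.run(retry_with_backoff(flaky))
```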

Let me know if there’s anything else needed !

| Statistic Category | Budissa Bautzen Stats (2023) |
| --- | --- |
| Total Wins | 10 |
| Total Draws | 8 |
| Total Losses | 7 |
| Average Goals Scored per Game | 1.8 |
| Average Goals Conceded per Game | 1.1 |
| Last Five Form (W-D-L) | 3-1-1 |
| Head-to-Head vs VFC Plauen (2023), Wins-Losses-Draws | 1-0-0 |