Overview / Introduction
The Vienna Capitals are a professional ice hockey team based in Vienna, Austria. They compete in the Austrian Hockey League (EBEL), the top tier of professional ice hockey in Austria. Founded in 2001, the team has become a significant presence in Austrian sports culture under head coach Peter Draisaitl.
Team History and Achievements
The Vienna Capitals have a history of notable achievements. Since their inception, they have secured multiple Austrian championship titles, with victories in 2005 and 2017, and have been runners-up on several other occasions while consistently performing well in the league standings.
Current Squad and Key Players
The current squad features several key players who significantly impact the team’s performance. Notable among them is defenseman Nicolas Wöss, known for his defensive prowess and leadership on the ice. Forward Lukas Haudum is another star player, renowned for his scoring ability and playmaking skills.
Team Playing Style and Tactics
The Vienna Capitals are known for their robust defensive tactics and disciplined play style. They often employ a balanced formation that emphasizes both defense and offensive opportunities. Their strengths lie in strong goaltending and solid team coordination, while weaknesses can include occasional lapses in maintaining offensive pressure.
Interesting Facts and Unique Traits
Fans affectionately call the Vienna Capitals “Capitals” or “Vienna’s Ice Warriors.” The team boasts a passionate fanbase known as “The Capitalists,” who support them fervently at home games. Rivalries with teams like EC Red Bull Salzburg add to the excitement of each season.
Lists & Rankings of Players
- Lukas Haudum: Top scorer ✅
- Nicolas Wöss: Defensive leader ❌ (due to recent injury)
- Jordan Knackstedt: Veteran forward standout ✅
Comparisons with Other Teams
In comparison to other teams within the EBEL, such as EC Red Bull Salzburg or HC Innsbruck, the Vienna Capitals maintain a competitive edge due to their strategic depth and experienced roster.
Case Studies or Notable Matches
A pivotal stretch came in the 2017 season, when the team clinched its second championship title after a series of high-stakes playoff games against formidable opponents such as EHC Linz.
| Statistic | Vienna Capitals |
|---|---|
| Last Season Standings | 4th Place |
| Last 10 Games Form | W-L-W-L-W-W-L-W-L-W (5-5) |
| Head-to-Head Record vs Red Bull Salzburg | Winning: 18 – Losing: 15 – Draws: 6 |
Tips & Recommendations for Betting Analysis 💡
To effectively analyze betting opportunities for Vienna Capitals matches:
- Analyze recent form trends to predict performance consistency (see the sketch after this list).
- Evaluate head-to-head records against upcoming opponents for insights into potential outcomes.
- Monitor player fitness levels; injuries can significantly impact game results.
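As a toy illustration of the first tip, here is a minimal sketch that turns a recent-form string into a win rate; the "W-L-…" notation is assumed to match this article's own table, not any standard data feed:

```python
def win_rate(form: str) -> float:
    # Split "W-L-W-..." into individual results and count wins.
    results = form.split("-")
    return results.count("W") / len(results)

print(win_rate("W-L-W-L-W-W-L-W-L-W"))  # 0.5, matching the (5-5) record above
```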
“The Vienna Capitals’ strategic depth makes them a formidable opponent every season,” says sports analyst Mark Thompson.
Pros & Cons of Current Team Form or Performance ✅❌
- Promising Pros:
- Solid defensive strategies ✅
- Cohesive teamwork ✅
- Potential Cons:
- Occasional lapses in sustained offensive pressure ❌

```python
[0]: #!/usr/bin/env python
[1]: """
[2]: This script generates a 'best guess' file from an input file.
[3]: """
[4]: import argparse
[5]: import logging
[6]: import os
[7]: import re
[8]: import sys
[9]: from . import __version__
[10]: from .config import Config
[11]: from .file_utils import get_file_handle
[12]: class BestGuessGenerator(object):
[13]:     """
[14]:     Generate best guess file from input file.
[15]:     :param config: Configuration object.
[16]:     :ivar _config: Configuration object.
[17]:     :ivar _input_file_handle: Input file handle.
[18]:     :ivar _output_file_handle: Output file handle.
[19]:     """
[20]:     def __init__(self, config):
[21]:         self._config = config
[22]:     def generate(self):
```
***** Tag Data *****
ID: 1
description: A placeholder method `generate` within `BestGuessGenerator` class that
appears to be intended for complex operations but currently contains only nested
comments/indentation.
start line: 20
end line: 46
dependencies:
- type: Class
name: BestGuessGenerator
start line: 12
end line: 21
context description: This method is supposed to generate a 'best guess' file from an
input file using configurations provided by `Config`. The complexity lies in its
intended purpose rather than its current implementation which consists solely of
nested comments/indentations.
algorithmic depth: N4
algorithmic depth external: N
obscurity: 4
advanced coding concepts: 4
interesting for students: 5
self contained: Y
*************
## Suggestions for complexity
1. **Dynamic Configuration Loading**: Implement logic to dynamically load different configurations based on runtime parameters or environment variables.
2. **Custom Exception Handling**: Introduce custom exception classes that provide detailed error messages specific to various stages of generating the 'best guess' file.
3. **Asynchronous File Processing**: Modify the `generate` method to process files asynchronously using Python's `asyncio` library to handle large input files efficiently.
4. **Data Validation Layer**: Add a data validation layer before processing starts that ensures all necessary configuration parameters are correctly set up.
5. **Machine Learning Integration**: Integrate a machine learning model that predicts some part of the 'best guess' output based on historical data patterns.
## Conversation
I’ve got this method here [SNIPPET], but it’s just full of nested comments right now… Any idea how I can make it actually do something useful? Specifically thinking about making it dynamically load different configs based on runtime parameters.
Sure thing! To dynamically load different configurations based on runtime parameters, you could modify your `__init__` method to accept additional parameters like environment variables or command-line arguments that determine which configuration to use.
Here’s how you might adjust your code:
```python
import os

class BestGuessGenerator(object):
    def __init__(self, config_name=None):
        if config_name is None:
            config_name = os.getenv('CONFIG_NAME', 'default')
        self._config = self.load_config(config_name)

    def load_config(self, config_name):
        # Logic here to load different configurations based on config_name
        if config_name == 'default':
            return DefaultConfig()
        elif config_name == 'alternative':
            return AlternativeConfig()
        else:
            raise ValueError(f"Unknown configuration name {config_name}")
```
This way you can dynamically choose which configuration to use at runtime by setting an environment variable `CONFIG_NAME` or passing it directly when initializing your class.
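For example, hypothetical usage (assuming `DefaultConfig` and `AlternativeConfig` exist as in the snippet above):

```python
import os

# Select the configuration through the environment...
os.environ["CONFIG_NAME"] = "alternative"
generator = BestGuessGenerator()  # resolves CONFIG_NAME at runtime

# ...or pass the name explicitly, bypassing the environment lookup.
generator = BestGuessGenerator(config_name="default")
```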
Okay cool but what if I want custom exceptions thrown if there’s any issue with loading these configs? Like maybe if certain required keys are missing?
Great idea! Custom exception handling will make debugging much easier. You can define custom exceptions specifically for configuration issues:
```python
import os

class ConfigError(Exception):
    pass

class MissingKeyError(ConfigError):
    def __init__(self, keys):
        super().__init__(f"Missing required configuration key(s): {keys}")

class BestGuessGenerator(object):
    def __init__(self, config_name=None):
        if config_name is None:
            config_name = os.getenv('CONFIG_NAME', 'default')
        self._config = self.load_config(config_name)

    def load_config(self, config_name):
        if config_name == 'default':
            config = DefaultConfig()
        elif config_name == 'alternative':
            config = AlternativeConfig()
        else:
            raise ConfigError(f"Unknown configuration name {config_name}")

        # Example check for required keys after loading the configuration.
        # Note: this must run *before* returning, or it is unreachable.
        required_keys = ['key1', 'key2']
        missing_keys = [key for key in required_keys if key not in config]
        if missing_keys:
            raise MissingKeyError(missing_keys)
        return config
```
Now you’ll have more granular control over what goes wrong during configuration loading.
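For instance, a hedged sketch of how calling code could react to the custom exceptions defined above (the subclass must be caught before its parent):

```python
try:
    generator = BestGuessGenerator(config_name="alternative")
except MissingKeyError as e:
    print(f"Config loaded but incomplete: {e}")
except ConfigError as e:
    print(f"Could not load configuration at all: {e}")
```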
Nice! Now let’s say I want this whole process asynchronous because my input files are huge sometimes… How would I go about doing that?
To handle large input files asynchronously, you can use Python’s `asyncio` library along with asynchronous file reading techniques such as those provided by `aiofiles`. Here’s how you might refactor your code:
Firstly install aiofiles via pip:
```bash
pip install aiofiles
```
Then update your code:
```python
import asyncio
import aiofiles

class BestGuessGenerator(object):
    def __init__(self, config):
        self._config = config

    @classmethod
    async def create(cls, config):
        # Python does not allow 'async def __init__', so use an
        # awaitable factory method instead.
        return cls(await cls.load_config(config))

    @staticmethod
    async def load_config(config):
        # Assuming some async operation here
        await asyncio.sleep(0)
        return DefaultConfig()

    async def generate(self):
        async with aiofiles.open(self._config.input_file_path(), mode='r') as infile_handle, \
                   aiofiles.open(self._config.output_file_path(), mode='w') as outfile_handle:
            async for line in infile_handle:
                processed_line = await self.process_line(line)
                await outfile_handle.write(processed_line + '\n')

    async def process_line(self, line):
        # Process each line asynchronously
        await asyncio.sleep(0)
        return f"Processed {line.strip()}"
```
This will allow your program to read from and write to files asynchronously without blocking other operations.
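Since Python forbids `async def __init__`, the sketch above uses an awaitable `create` factory; a caller would drive it like this (with `config` standing in for whatever object your pipeline supplies):

```python
import asyncio

async def main():
    generator = await BestGuessGenerator.create(config)  # 'config' is supplied by your pipeline
    await generator.generate()

asyncio.run(main())
```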
Awesome stuff! But before we finish up… What about adding some kind of data validation layer before processing starts? Like ensuring all necessary params are set up properly?
Absolutely! Adding a data validation layer will help catch issues early before processing begins. Here’s how you could integrate such functionality:
```python
class ValidationError(Exception):
    pass

class MissingParameterError(ValidationError):
    def __init__(self, parameters):
        super().__init__(f"Missing required parameter(s): {parameters}")

class BestGuessGenerator(object):
    def __init__(self, config):
        self._config = config

    @classmethod
    async def create(cls, config):
        # Validate before loading, so bad setups fail fast.
        await cls.validate_config(config)
        return cls(await cls.load_config(config))

    @staticmethod
    async def validate_config(config):
        required_params = ['input_file_path', 'output_file_path']
        missing_params = [param for param in required_params
                          if not getattr(config, param, None)]
        if missing_params:
            raise MissingParameterError(missing_params)

    @staticmethod
    async def load_config(config):
        await asyncio.sleep(0)
        return DefaultConfig()

    async def generate(self):
        async with aiofiles.open(self._config.input_file_path(), mode='r') as infile_handle, \
                   aiofiles.open(self._config.output_file_path(), mode='w') as outfile_handle:
            async for line in infile_handle:
                processed_line = await self.process_line(line)
                await outfile_handle.write(processed_line + '\n')

    async def process_line(self, line):
        await asyncio.sleep(0)
        return f"Processed {line.strip()}"

# Usage example, assuming DefaultConfig has attributes input_file_path() and output_file_path():
# async def main():
#     generator = await BestGuessGenerator.create(DefaultConfig())
#     await generator.generate()
# asyncio.run(main())
```
This setup ensures that all necessary parameters are validated before proceeding with any processing steps.
<>: Hi there,
I’m working on implementing gradient computation within my training loop using TensorFlow Probability distributions like Bernoulli and Normal. The idea is to calculate gradients using either log probabilities or sample-based methods depending on whether we’re sampling directly or indirectly.
However, I’m running into an issue where my code isn’t behaving as expected when using indirect sampling (`sample_based=True`). It seems like there might be some logical error causing incorrect gradient calculations when using samples instead of log probabilities directly.
Here’s the buggy version of my code snippet:
```python
if rank == hvd.rank():
    mean_grads_sample_log_pdf.append(torch.autograd.grad(sample_log_pdf.sum(), dist.params)[0].numpy().astype(
        np.float32))
    if args.mean_grad_every_n >= 1:
        mean_grads_event_log_pdf.append(torch.autograd.grad(event_log_pdf.sum(), dist.params)[0].numpy().astype(
            np.float32))
if rank == hvd.rank():
    mean_grads_sample_loss.append(torch.autograd.grad(sample_loss.sum(), dist.params)[0].numpy().astype(
        np.float32))
    if args.mean_grad_every_n >= 1:
        mean_grads_event_loss.append(torch.autograd.grad(event_loss.sum(), dist.params)[0].numpy().astype(
            np.float32))
if rank == hvd.rank():
    if sample_based == True:  # Indirect gradients (Score Function Gradient Estimate)
    else:  # Direct gradients (REINFORCE estimate)
        # Gradients computed analytically from log probability functions (score function gradients)
        for p, v in zip(model.parameters(), mean_grads_event_log_pdf[-1]):
```
And here’s the traceback I’m getting:
```plaintext
Traceback (most recent call last):
  File "train.py", line XX, in train_model_loop_code_for_blog_post_preamble_and_samplelogpdf_and_eventlogpdf_and_loss_computations_for_gating_gradients_via_score_function_or_reinforce_with_just_some_testing_code_to_get_started_and_wip_crap_may_2021_changed_args_to_mean_grad_every_n_to_bool_may_24_2021_new_changing_norm_thres__and_more_may_26_2021_newly_adding_input_dist_layer_may_27_2021_newly_adding_output_dist_layer_june_01_2021_newly_adding_argparse_code_june_08_2021_newly_switching_from_manual_seed_setting_to_argparse_passed_seed_setting_july_12_2021_updated_to_fix_flaky_test_june_29_didnt_fix_it_properly_july_12_see_pr_comment_july_13_2021_fixing_flaky_test_july_19_2021_moved_flaky_test_under_right_main_training_loop_conditionally_called_by_main_augmenting_data_in_train_model_loop_code_with_their_two_synthetic_feature_columns_via_tfd_four_parameterization_synthetic_mode_switch_august_03rd_2021_now_using_the_custom_scripted_pipeline_via_args.with_customized_pipeline
  File "", line YY
  File "/path/to/tensorflow/python/ops/math_ops.py", line ZZ
ValueError: No gradients provided for any variable
```
I suspect there might be an issue with how I’m handling conditions around direct versus indirect sampling logic but I’m not entirely sure where it went wrong. Could someone help me figure out why no gradients are being calculated correctly?
Thanks!
<>:
It looks like there might be some indentation issues causing parts of your code not to execute under certain conditions. Specifically, ensure that your logic inside conditional blocks is correctly indented so that all necessary operations are executed when expected.
Here are a few suggestions:
1. **Indentation Check**: Make sure all lines under conditional statements (`if`) are properly indented so they execute only when intended.
2. **Conditional Logic**: Double-check your logic around `sample_based`. Ensure that both branches (`True` and `False`) have complete logic paths defined.
3. **Gradient Calculation**: Verify that `dist.params` contains tensors requiring gradients (`requires_grad=True`). If not set correctly, TensorFlow won’t compute gradients.
4. **Debugging Gradients**: Use print statements or logging inside each condition block to confirm which parts of your code execute during runtime.
5. **TensorFlow Version**: Ensure compatibility between TensorFlow Probability and TensorFlow versions used.
Here’s a small snippet illustrating proper indentation:
```python
if sample_based == True:
    # Indirect gradients (Score Function Gradient Estimate)
    ...  # fill in the indirect branch; an empty branch is a SyntaxError
else:
    # Direct gradients (REINFORCE estimate)
    for p, v in zip(model.parameters(), mean_grads_event_log_pdf[-1]):
        ...
```
By ensuring these points are addressed, you should be able to resolve why no gradients are being calculated correctly. Let me know if you need further assistance! 😊
<>: Alright, I see what you’re saying about the indentation possibly causing issues with execution flow. I’ll double-check that everything is properly aligned under each conditional block.
Also, regarding ensuring tensors require gradients—how do I verify that `dist.params` has `requires_grad=True`? Should I explicitly set this somewhere?
Additionally, could there be any other logical errors related to how I’m handling direct versus indirect sampling? Maybe something subtle I’m overlooking?
Let me know what you think!
<>: To verify that tensors have `requires_grad=True`, you can iterate over `dist.params` and check each tensor individually:
```python
for param in dist.params:
    print(param.requires_grad)
```
If any tensor does not have `requires_grad=True`, ensure it’s set appropriately during initialization or model definition.
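If a parameter turns out not to track gradients, you can usually switch it on in place; this assumes `dist.params` holds `torch.Tensor` objects, as your snippet implies:

```python
for param in dist.params:
    if not param.requires_grad:
        param.requires_grad_(True)  # enable gradient tracking in place
```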
Regarding logical errors between direct versus indirect sampling:
- **Indirect Sampling Logic**: Ensure that when using samples (`sample_based=True`), your computations align with score-function gradient estimation principles.
- **Direct Sampling Logic**: For direct methods (`sample_based=False`), ensure a correct implementation of the REINFORCE estimates.
- **Consistent Parameter Usage**: Make sure consistent parameters are used across both methods unless intentionally varied.
- **Debugging Steps**: Add debug prints inside each branch to confirm expected execution paths and values (a minimal estimator sketch follows below).
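For reference, here is a minimal self-contained sketch of the score-function (REINFORCE) estimator in PyTorch, independent of your training loop; the distribution and per-sample loss are made up purely for illustration:

```python
import torch

# Score-function identity: grad E[f(x)] = E[f(x) * grad log p(x)]
logits = torch.tensor([0.0], requires_grad=True)
dist = torch.distributions.Bernoulli(logits=logits)

samples = dist.sample((1000,))     # sampling itself is not differentiable
f = 3.0 * samples                  # an arbitrary per-sample loss
log_prob = dist.log_prob(samples)  # differentiable w.r.t. logits

# Surrogate whose gradient equals the score-function estimate of grad E[f].
surrogate = (f.detach() * log_prob).mean()
grad, = torch.autograd.grad(surrogate, logits)
print(grad)  # analytically 3 * p * (1 - p) = 0.75 at logits = 0, up to sampling noise
```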
These checks should help identify any subtle issues affecting gradient calculations. Let me know how it goes! 😊

<>: Hi there,
I’ve been trying to implement a function `_get_x_y_coords_from_region_coordinates(region)` that is supposed to convert region coordinates into x-y coordinates via geometric transformations, using sine and cosine rotations around specified centers `(x_cen_iptc, y_cen_iptc)`.
However, something seems off with my implementation: it doesn’t produce correct results when I test various region-coordinate transformations around these centers, using angles derived from polar coordinates `(phi)` converted into radians by multiplying by `pi/180`.
Here’s my buggy version of the code:
```python
def _get_x_y_coords_from_region_coordinates(region):
    (x_ctctra_pt_one_loc_x, x_ctctra_pt_one_loc_y) = region.loc_bounding_box_vertices_pts['Bounding Box Vertex One'].location
    x_ctctra_pt_one_loc_x = x_ctctra_pt_one_loc_x * region.scaling_factor
    x_ctctra_pt_one_loc_y = x_ctctra_pt_one_loc_y * region.scaling_factor
    (x_ctctra_pt_two_loc_x, x_ctctra_pt_two_loc_y) = region.loc_bounding_box_vertices_pts['Bounding Box Vertex Two'].location
    x_ctctra_pt_two_loc_x = x_ctctra_pt_two_loc_x * region.scaling_factor
    x_ctctra_pt_two_loc_y = x_ctctra_pt_two_loc_y * region.scaling_factor
    rad = (np.pi) / 180
    y_top_to_leg = abs(x_ctctra_pt_one_loc_y - x_ctctra_pt_two_loc_y)
    x_side_len = (abs(x_ctctra_pt_two_loc_x - x_ctctra_pt_one_loc_x) + y_top_to_leg) / np.cos((90 - region.rot_angle) * rad)
    y_ht_of_traingle = abs(x_side_len**2 - (y_top_to_leg**2))
    y_btm_len = y_ht_of_traingle * np.sin((90 - region.rot_angle) * rad)
    y_hypotenuse = y_btm_len / y_cos
    x_adjacent_horiz_len = y_hypotenuse * np.cos((90 - region.rot_angle) * rad)
    x_opposite_horiz_len = y_hypotenuse * np.sin((90 - region.rot_angle) * rad)
    if region.rot_angle > 0:
        x_adjacent_horiz_len = -x_adjacent_horiz_len
    x_ctr_of_rot_circle = x_ctctra_pt_one_loc_x + (x_adjacent_horiz_len / 2)
    y_ctr_of_rot_circle = x_ctctra_pt_one_loc_y + (y_top_to_leg / 2) + y_btm_len
    radius_of_rot_circle = int(y_ht_of_traingle / (np.sin((90 - region.rot_angle) * rad)))
    xcirc = radius_of_rot_circle * np.cos(rad * (np.arange(360)) * region.rot_angle) + x_ctr_of_rot_circle
    ycirc = radius_of_rot_circle * np.sin(rad * (np.arange(360)) * region.rot_angle) + y_ctr_of_rot_circle
    return xcirc.tolist(), ycirc.tolist()
```
And here’s the traceback error message I’m encountering:
```plaintext
Traceback (most recent call last):
  File "coordinate_transformation.py", line XX, in _get_x_y_coords_from_region_coordinates
NameError: name 'y_cos' is not defined
```
I’m confused about why this error occurs, since everything seemed logically placed according to my understanding of trigonometric transformations around circles centered at points derived from the bounding box vertices, scaled appropriately by the region scaling factor!
Could someone help me figure out what might be going wrong here? Why am I getting this undefined variable error even though everything else seems mathematically sound?
Thanks!
<>:
It looks like the error is due to the undefined variable `y_cos`. This variable isn’t defined anywhere in your code snippet before it’s used in calculating `y_hypotenuse`. To fix this issue, you should replace `y_cos` with its intended value or calculation.
From context clues within your code snippet where trigonometric functions like cosine are used elsewhere with angles converted from degrees using radians (`rad * angle_in_degrees`), it seems likely that `y_cos` should represent something similar—perhaps related to calculating cosine values at specific angles involved in your transformation logic.
Here’s one way you might adjust your code:
Replace this line:
```python
y_hypotenuse = y_btm_len / y_cos
```
With something more appropriate based on what you intend for `y_cos`. If it was meant to be the cosine of `(90 - region.rot_angle)` degrees, then replace it with:
```python
y_hypotenuse = y_btm_len / np.cos((90 - region.rot_angle) * rad)
```
Make sure this change aligns with your mathematical intentions behind these transformations!
Additionally, review other parts where trigonometric functions interact closely with angles converted to radians; ensuring consistency will help avoid similar issues elsewhere in your calculations.
If further issues arise after making these changes, feel free to ask again! 😊
Hope this helps!
<>: Thanks for pointing out the undefined variable issue! Now that I’ve fixed that part by replacing `y_cos` with `np.cos((90 - region.rot_angle) * rad)`, I’m still seeing some inaccuracies in the transformed coordinates when testing different rotation angles around various centers `(x_cen_iptc, y_cen_iptc)`.
I suspect there might be an issue with how distances between vertices are calculated after scaling, or possibly an incorrect application of rotation matrices around these centers instead of just the relative points `(x_ctctra…)`.
Could you guide me on how to ensure accurate distance calculations post-scaling and verify the correct application of rotation matrices centered at `(x_cen_iptc, y_cen_iptc)`?
<>: Certainly! To ensure accurate distance calculations post-scaling and a correct application of rotation matrices centered at `(x_cen_iptc, y_cen_iptc)`, follow these steps:
### Step-by-Step Guide
#### Scaling Verification
1. **Scale Correctly:** Ensure each vertex coordinate is scaled properly before applying transformations.
```python
x_scaled = x_original * region.scaling_factor
y_scaled = y_original * region.scaling_factor
```
#### Rotation Matrix Application
2. **Translate Points:** Move points so `(x_cen_iptc, y_cen_iptc)` temporarily becomes the origin.
```python
x_translated = x_scaled - x_cen_iptc
y_translated = y_scaled - y_cen_iptc
```
3. **Apply Rotation Matrix:** Use rotation matrix formulas centered at origin.
\[
\begin{bmatrix} x_{\text{rotated}} \\ y_{\text{rotated}} \end{bmatrix}
=
\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}
\begin{bmatrix} x_{\text{translated}} \\ y_{\text{translated}} \end{bmatrix}
\]
4. **Reverse Translation:** Move back by adding center coordinates.
```python
x_final = x_rotated + x_cen_iptc
y_final = y_rotated + y_cen_iptc
```
#### Example Code Snippet
Here’s how you can apply these steps programmatically:
```python
import numpy as np

def rotate_point(x_scaled, y_scaled, angle_deg, x_cen_iptc, y_cen_iptc):
    # Convert the rotation angle from degrees to radians.
    angle_rad = angle_deg * (np.pi / 180)
    # Translate the point so the center of rotation becomes the origin.
    x_translated = x_scaled - x_cen_iptc
    y_translated = y_scaled - y_cen_iptc
    # Apply the rotation matrix from step 3.
    x_rotated = x_translated * np.cos(angle_rad) - y_translated * np.sin(angle_rad)
    y_rotated = x_translated * np.sin(angle_rad) + y_translated * np.cos(angle_rad)
    # Reverse the translation back to the original frame.
    x_final = x_rotated + x_cen_iptc
    y_final = y_rotated + y_cen_iptc
    return x_final, y_final
```
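A quick sanity check of the helper: rotating the point (1, 0) by 90° about the origin should land at roughly (0, 1):

```python
x, y = rotate_point(1.0, 0.0, angle_deg=90.0, x_cen_iptc=0.0, y_cen_iptc=0.0)
print(round(x, 6), round(y, 6))  # -> 0.0 1.0
```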
### Conclusion
By following these steps carefully, scaling first and then translating points relative to the centers before applying rotations, you should achieve accurate transformations around the specified centers, avoiding the inaccuracies that come from rotating about the initial points rather than the centers themselves!
Feel free to reach out again if you need further clarification or adjustments! 😊✨

<>: Hi there,
I’m working on implementing Tarjan’s algorithm using PyTorch tensors instead of Python lists because lists don’t support GPU acceleration well enough for our needs at scale (~100k nodes). Here’s my attempt at translating Tarjan’s algorithm into PyTorch syntax:
```python
def visit(u,neg_edges,score_list,u_visited,parent,time_stamp,
prev_time_stamp,u_scc_color,scc_root_idx,
scc_set_tensor,set_list,scc_size_limit):
u_visited[u] = True
def tarjan(u,neg_edges,score_list,u_visited,parent,time_stamp,
prev_time_stamp,u_scc_color,scc_root_idx,
scc_set_tensor,set_list,scc_size_limit):
while len(scc_set_tensor)<=len(score_list)//scc_size_limit:
result=[]
result_dic={}
time_stamp=torch.zeros_like(score_list,dtype=torch.long)
parent=torch.ones_like(score_list,dtype=torch.long)*-1
u_visited=torch.zeros_like(score_list,dtype=torch.bool)
u_scc_color=torch.zeros_like(score_list,dtype=torch.long)
prev_time_stamp=time_stamp.clone()
num_nodes=len(score_list)
set_list=[]
scc_root_idx=[]
num_complete_nodes=0
neg_edges=neg_edges.long()
while num_complete_nodes<num_nodes:
potential_fill_val=num_complete_nodes+scc_size_limit+10000
while potential_fill_valcur_time_step:
find_start=scipy.special.argmax(u_scc_color==cur_time_step)
start=u_scc_color!=cur_time_step
start[start]=find_start.item()+torch.arange(start.sum())
idx=start.nonzero()[:,0]
torch.cuda.empty_cache()
group=set()
check_group=set()
num_groups=0
i=idx.size(0)-1
while i>=0:
current=u_scc_color[idx[i]]
check_group.add(idx[i])
group.add(idx[i])
i-=1
while True:
new_added=False
candidates=u_scc_color==current
candidates[:current]=False
candidates[idx]=False
candidates_nonzero=candidates.nonzero()[:,0]
if candidates_nonzero.size(0)==0:
break
i=candidates_nonzero.size(0)-1
while i>=0:
neighbor=u_neg_edge[candidates_nonzero[i]]
neighbor_time_max=time_max[min_index==neighbor]
assert neighbor_time_maxscc_size_limit:
break
elif len(check_group)==len(group):
break
else:
diff=check_group-group
diff_node=list(diff)[0]
diff_pre=time_prev[min_index==diff_node]
diff_cur=time_max[min_index==diff_node]
min_diff=dict()
min_diff_arr=torch.tensor([float('inf')]).to(device)
candidate=list(check_group-diff)
for i_candidate,candidate_element in enumerate(candidate):
min_diff_arr=torch.min(min(min_diff_arr,diff_pre-min_index==candidate_element))
min_diff[candidate[i_candidate]]=min_diff_arr.item()
min_diff_node=min(min_diff,key=min_diff.get)
time_prev[min_index==min_diff_node]=diff_cur
check_group.remove(min_diff_node)
group.remove(min_diff_node)
i-=1
num_groups+=len(group)
group=set()
del check_group
gc.collect()
torch.cuda.empty_cache()
root=list(group)
root.sort()
scipy.special.arraysetops.union(scc_set_tensor[root],idx[group])
scipy.special.arraysetops.union(prev_set_tensor[root],idx[group])
scipy.special.arraysetops.union(min_index_set[root],idx[group])
scipy.special.arraysetops.union(time_prev_set[root],time_prev[idx[group]])
scipy.special.arraysetops.union(time_max_set[root],time_max[idx[group]])
del group
gc.collect()
torch.cuda.empty_cache()
cur_time_step+=len(root)
result_dic[cur_time_step]=[root]
result.append(root)
u_scc_color[idx[group]]=cur_time_step+10000000000000
del idx[group]
del root
return result,result_dic
def run_tarjan(neg_edge_index,neg_edge_type,neg_edge_score,num_nodes,special_scores=None,max_num_cc=-9999999999999,cuda_id=-1):
global device
device=f’cuda:{cuda_id}’
neg_edge_index=neg_edge_index.cpu()
neg_edge_type=neg_edge_type.cpu()
neg_edge_score=neg_edge_score.cpu()
special_scores=special_scores.cpu()
unique_neg_type=neg_edge_type.unique()
unique_neg_type_num=len(unique_neg_type)
neg_edge_dict={}
special_scores_dict={}
special_scores_unique=special_scores.unique()
special_scores_unique_num=len(special_scores_unique)
special_scores_unique_np=special_scores_unique.numpy()
for i_type,type_ele in enumerate(unique_neg_type):
neg_edge_dict[type_ele.item()]=(neg_edge_index[neg_edge_type==type_ele,:].cpu()).numpy()
special_scores_dict[special_scores_unique_np[i_type]]=[]
all_result=[]
all_result_dic={}
global smp_id_count
global neg_edges_all
global score_lists_all
global u_visited_all
global parent_all
global time_stamps_all
global prev_time_stamps_all
global u_scc_colors_all
global smp_id_count_all
global smp_id_count_root_idx_all
global smp_id_count_curr_root_idx_all
global curr_root_idx_all
global num_cc_total
curr_root_idx=np.zeros([unique_neg_type_num],dtype=np.int64)
curr_root_idx.fill(-99999999)
max_num_cc_each=int(max_num_cc//unique_neg_type_num)
max_num_cc_each=max(max_num_cc_each,len(neg_edge_dict.keys()))
max_num_cc_each=max(max_num_cc_each,int(np.sqrt(num_nodes)))
max_num_cc_each=min(max_num_cc_each,num_nodes//10)
max_score=int(np.sqrt(num_nodes))
max_score=max(max_score,len(neg_edge_dict.keys()))
max_score=min(max_score,num_nodes//10)
max_num_cc_each=max(max_score,max_num_cc_each)
with tqdm(total=len(unique_neg_type)) as progress_bar:
progress_bar.set
```