There is far more balanced discussion of this in the EU than in the US - see, for example, search results for "why Norway does not recommend child vaccination". Overall, the data suggest vaccination doesn't slow the spread, and at around ~40 years of age there is a flip in risks: below that age, you have a higher chance of complications from the vaccine than from COVID. I believe it should be a personal choice that everyone weighs for themselves, and I am particularly concerned about the "booster" philosophy: if you architected a system that had to be rebuilt every 3-4 months, you would be sent back to the drawing board to address the longevity of the system. I don't see that demand being projected onto pharmaceutical companies. Also, the title frames the discussion negatively; "why people vaccinate voluntarily" would be an example of negative framing in the other direction. A framing more suitable for a rationalist community discussion would be a balanced treatment of pros and cons.

Thank you for this post. I will re-use and reference it - I have big plans for this topic. I want to explore and expand the decision tree to cover the effect of multiple vaccinations (boosters). My intuition is that multiple vaccinations work like this: a single vaccination should work, and it's low risk. With two vaccinations, the probability of vaccine effectiveness decreases, because the doses are no longer independent - but they are not entirely dependent on each other either. I have an equation in mind, but I am too far from medical professionals to validate it. More importantly, I think that while the probability of vaccine effectiveness decreases with each booster, the risk factors accumulate: each individual risk is small, but they add up, and by the second booster the cumulative risk is not that small. Any thoughts? I am open to ideas and can be convinced one way or another. Small note: some diagrams are not visible on iPad.
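To make the accumulation point concrete, here is a toy sketch in Python. The per-dose risk `r` is an entirely made-up number for illustration, not medical data; the only claim is the arithmetic of compounding small independent risks:

```python
# Toy model: cumulative chance of at least one adverse event over n doses,
# assuming each dose carries an independent per-dose risk r (made-up number).
def cumulative_risk(r, n):
    return 1 - (1 - r) ** n

r = 0.001  # hypothetical 0.1% per-dose risk, for illustration only
for n in range(1, 5):
    print(n, round(cumulative_risk(r, n), 6))
```

Under the independence assumption the cumulative risk grows roughly linearly while each per-dose risk stays constant - which is the shape of the argument above, whatever the true numbers are.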

I built a very similar project (full source code available), and it won the platinum prize in the Redis Hackathon 2021.

My differentiator from Connected Papers is that I turn "strings into things" using the external UMLS Metathesaurus, so my nodes are industry-wide "concepts" and the edges in the graph are lists of papers connecting the same concepts.

Shall we collaborate and take both projects further? 

Thank you for this post - very encouraging. I was thinking about applying to the LTFF; I have all the prerequisites, and now I feel it's worth a try.

"Partial credit for getting close to the right answer" - this is where probability theory comes into play: what is the likelihood of the enemy reading the message if it is intercepted in full?

Also, the concept that helps me think about bringing abstract concepts into reality is time, or any notion of temporal dependencies - something probability theory largely missed for a while and is now recovering; see the Nature article.

I would recommend structuring applied maths learning differently: start with The Computational Beauty of Nature, then go deep into the relevant areas, plus graph theory and graph algebra. Also build a deep understanding of multi-objective optimisation techniques: NSGA-III, Pareto fronts/Pareto surfaces.
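As a minimal illustration of the Pareto-front idea mentioned above, here is a sketch (plain Python, not tied to any particular optimisation library) of extracting the non-dominated set from a list of two-objective points, assuming both objectives are minimised:

```python
# Extract the Pareto front (non-dominated set) from a list of points,
# assuming every objective is to be minimised.
def pareto_front(points):
    front = []
    for p in points:
        # p is dominated if some other point q is <= p in all objectives
        # and is not identical to p.
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # -> [(1, 5), (2, 3), (4, 1)]
```

NSGA-III and similar algorithms do much more (non-dominated sorting into ranks, reference-point niching), but this is the core relation they are built on.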

I agree that nothing beats practical projects, but in modern life you need to learn a lot of background information before jumping into the real world. There are plenty of ML projects and examples equivalent in complexity to the ToDo app (the 12-factor app): a single component with clearly defined boundaries. The next step in the real world would be: here is a payment platform with 270+ services and components - how does your AI/ML component fit into it? Who do you talk to to figure out the business value of the AI/ML component (business analysis, domain-driven design)? How do you talk productively to the creative colleagues responsible for user experience (i.e. jobs-to-be-done)?

I see this gap quite consistently, and I am trying to address it on the technical side by building a medium-sized AI/ML project with three pipelines. I think modern ML/AI professionals need to know the things above before jumping into any real-world project.

I think the most important point of teaching maths for data science is to build mental models in the data scientist's head. That is part of the process of learning maths, and it usually takes about two years (or a two-year university course). Bypassing that process backfires: startups raising money for AI/ML normally take around two years before shipping a product.

I think the mental-model part is the most difficult to teach, but obviously we are paid for specialised skills - like coding in Python - so everyone wants to jump straight into coding Python without putting effort into learning maths and building proper mental models, which I think is wrong. The coding part is easy: I am comfortable with Octave/Matlab, Python, and Rust. I would back Andrew Ng's choice of Octave, for example - it's one of the most concise ways to get ML concepts working, although I disliked it when I was a student. Then I tried to translate the following code into Python:

%% Main loop

  % perform uniform selection, then intermediate crossover
  chx  = zeros(lamda, nvar);   % storage for new chromosomes
  sigx = zeros(lamda, nvar);   % storage for new sigmas
  for n = 1:lamda              % loop over the whole population
    parent = chrom(mod(n-1, p) + 1, :);      % uniform parent selection
    if (rand < xovr)           % roll the dice: do we cross over?
      mate  = chrom(randi(p), :);            % second parent, picked at random
      alpha = rand(1, nvar) * 1.5 - 0.25;    % blend factors in [-0.25, 1.25]
      chx(n,:) = alpha .* parent + (1 - alpha) .* mate;  % intermediate crossover
    else
      chx(n,:) = parent;       % just copy if not crossing
    end
  end
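For comparison, here is a rough NumPy translation of that loop - a sketch only, where `lamda`, `nvar`, `p`, `xovr`, and `chrom` are assumed to carry the same meanings as in the Octave code above:

```python
import numpy as np

def crossover_step(chrom, lamda, nvar, p, xovr, rng=None):
    """Uniform parent selection followed by intermediate (blend) crossover."""
    rng = rng or np.random.default_rng()
    chx = np.zeros((lamda, nvar))          # storage for new chromosomes
    for n in range(lamda):                 # loop over the whole population
        parent = chrom[n % p]              # uniform parent selection
        if rng.random() < xovr:            # roll the dice: do we cross over?
            mate = chrom[rng.integers(p)]  # second parent, picked at random
            alpha = rng.random(nvar) * 1.5 - 0.25   # blend factors in [-0.25, 1.25]
            chx[n] = alpha * parent + (1 - alpha) * mate  # intermediate crossover
        else:
            chx[n] = parent                # just copy if not crossing
    return chx
```

The vectorised Octave idioms (`rand(1, nvar)`, element-wise `.*`) map almost one-to-one onto NumPy broadcasting, which is a large part of why the translation is mostly mechanical.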