We currently use SharePoint for some of our document-intensive, paperless processes. In Windows 7 we have customized File Explorer Favorite links that point to numerous document libraries, so our users can simply drag and drop directly from Outlook or their desktop without having to visit the actual websites via Internet Explorer. This feature has been a godsend for us and has greatly improved our document management efficiency.

Thanks

Forum category: Discussions / General

Forum thread: Quick Access links in Windows 10 without renaming.

Moed B exam and solution published.

Grades for Moed B + final course grade are available under the "exam" tab. Grades will be processed by the Mazkirut soon.

Thanks,

Regev

Forum category: News / Course News

Forum thread: *** Moed B + Final grades published ***

For those who did Moed A, final grades have been calculated and are available under the "exam" tab. They will be submitted to the Mazkirut soon.

Let me know if there are any issues or appeals.

Thanks!

Forum category: News / Course News

Forum thread: *** Final Moed A grades published ***

Forum category: Discussions / HW5

Forum thread: killed message on nova

I was mistakenly multiplying var by I; thanks for catching that =) It does run in seconds now.

Forum category: Discussions / HW5

Forum thread: killed message on nova

Every Mac comes with ssh preinstalled, so you just need to find it.

Maybe try another terminal (try running ssh under Terminal/bash/konsole/tcsh).

2. In HW5, if you use numpy just for the heavy part (the distances $||x_i - u_m||^2$), then the code runs in a few seconds, so there is no need to do anything special.
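As a rough illustration of the tip above, here is a minimal numpy sketch of computing all the squared distances at once instead of in a Python double loop. The names `X` (n x d data, one point per row) and `U` (K x d means) are made up for illustration, not taken from the assignment handout.

```python
import numpy as np

def squared_distances(X, U):
    """All pairwise ||x_i - u_m||^2 as an (n, K) matrix, via the identity
    ||x - u||^2 = ||x||^2 - 2 x.u + ||u||^2 and numpy broadcasting."""
    x2 = np.sum(X ** 2, axis=1)[:, None]   # (n, 1) squared norms of the points
    u2 = np.sum(U ** 2, axis=1)[None, :]   # (1, K) squared norms of the means
    return x2 - 2.0 * X @ U.T + u2         # (n, K) squared distances
```

This replaces the per-point inner loop with one matrix multiplication, which is where the speedup comes from.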

Forum category: Discussions / HW5

Forum thread: killed message on nova

Although it seems short, it's complicated.

We had a test in this course about a week ago, and we have tests in other courses too.

Considering we didn't receive the grades for hw4,

please consider an extension of a few days, for the programming part only.

Thanks.

Forum category: Discussions / HW5

Forum thread: Submission deadline

I'm getting 'ssh: No match'

Forum category: Discussions / HW5

Forum thread: killed message on nova

After connecting to Nova, you can ssh to one of the gauss machines through PuTTY, using the command "ssh gauss-**.cs.tau.ac.il".

The password it asks for is the same one you used to log in to Nova.

Forum category: Discussions / HW5

Forum thread: killed message on nova

I also tried running the old hw3 code, which used to work fine, and now it too gives a killed message, at different points each time.

Is there anything to be done? Is it because of the current load on the servers?

Forum category: Discussions / HW5

Forum thread: killed message on nova

I've been battling the code for 3 days without any real progress.

Is there any point in submitting non-working code?

Forum category: Discussions / HW5

Forum thread: Programming Assignment

If we had seen the HW4 grades first, we would have known whether we could skip preparing the exercise at all.

Forum category: Discussions / HW5

Forum thread: Submission deadline

1. Calculate $\log P(z_i = j, x_i)$ - do this by using the log of the expression for the normal pdf directly. Let's call this number $b_{i,j}$.

2. Calculate $\log P(x_i) = \log(\sum_{j=1}^K P(z_i = j, x_i)) = \log(\sum_{j=1}^K \exp(b_{i,j}))$. This can be done with a logsumexp function (e.g., scipy.special.logsumexp).

3. Calculate $\log P(z_i = j \mid x_i)$. By Bayes' law, this is equal to $\log P(z_i = j, x_i) - \log P(x_i)$, which uses the quantities calculated in steps 1 and 2.
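The steps above can be sketched in a few lines of numpy. This assumes `b` is an (n, K) array with `b[i, j] = log P(z_i = j, x_i)` already computed from the log-pdf; a numerically stable logsumexp is written inline here, and scipy.special.logsumexp does the same thing.

```python
import numpy as np

def log_posteriors(b):
    """Given b[i, j] = log P(z_i = j, x_i), return log P(z_i = j | x_i)."""
    # Stable logsumexp over j: log P(x_i) = m + log(sum_j exp(b_ij - m))
    m = b.max(axis=1, keepdims=True)
    log_px = m + np.log(np.sum(np.exp(b - m), axis=1, keepdims=True))
    # Bayes' law in log space: log P(z|x) = log P(z, x) - log P(x)
    return b - log_px
```

Working in log space this way avoids the underflow you get from exponentiating very negative log-pdf values directly.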

There will be no extra time - this exercise is short, already has an extended deadline, and is non-mandatory (in the 5/6 sense). Note that the checker won't pick it up until Monday.

Forum category: Discussions / HW5

Forum thread: Programming Assignment

Can we get some directions on how to implement the calculations and some extra time to do it?

Thanks,

Ofer.

Forum category: Discussions / HW5

Forum thread: Programming Assignment

Each item x_i is a string of length 5, composed of two "2" characters, and three "0"/"1" characters.
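As a toy check of the stated format (length 5, exactly two "2" characters, and the remaining three characters "0" or "1") - the sample strings in the test are made up for illustration:

```python
def is_valid_item(x):
    """True iff x matches the described format: 5 chars, exactly two '2's,
    and every non-'2' character is '0' or '1'."""
    return (len(x) == 5
            and x.count('2') == 2
            and all(c in '01' for c in x if c != '2'))
```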

Forum category: Discussions / HW5

Forum thread: Q3

It says:

"What is the log-likelihood of the parameters given the data x1, …, xn?"

But no prior is defined.

I can write

P(data | parameters)

Is that what you meant?

Forum category: Discussions / HW5

Forum thread: Q3 | log likelihood of parameters given data, or data given parameters

The exam period is busy, and it seems this could also ease the load on the grader.

Thanks!

Forum category: Discussions / HW5

Forum thread: Submission deadline

Forum category: Discussions / HW5

Forum thread: Is Q(teta,teta-t) the EM likelihood function?

Forum category: News / Course News

Forum thread: Moed A exam and solution published

When I differentiated it I got (u-x)/sigma**2 (that is, differentiating only the log part).

Did I differentiate it incorrectly, or are the scribes wrong?

Forum category: Discussions / HW5

Forum thread: Mistake in the scribes?

Note that this is only the exam grade.

Solution will be posted soon.

Forum category: News / Course News

Forum thread: *** Moed A exam grades published ***

Forum category: News / Course News

Forum thread: HW3 returned

It seems that we can't reach the file hw5.py as described in the exercise

Thanks

Guy Oren

Forum category: Discussions / HW5

Forum thread: hw5.py is missing

The definition of the Q function includes these probabilities. We use the analytic form of Q to get the expressions that maximize it (by setting the derivative to 0). (12.9) is such an expression. It uses the posterior probabilities we calculated in the E-step.
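The exact form of (12.9) isn't quoted here, but the standard GMM M-step mean update it refers to - a posterior-weighted average of the data - can be sketched as follows. The names `gamma` (n x K E-step posteriors) and `X` (n x d data) are illustrative, not the scribes' notation.

```python
import numpy as np

def m_step_means(X, gamma):
    """Standard GMM mean update: mu_j = sum_i gamma_ij x_i / sum_i gamma_ij."""
    Nk = gamma.sum(axis=0)               # effective count of points per component
    return (gamma.T @ X) / Nk[:, None]   # (K, d) updated means
```

Each row is exactly the expression obtained by setting the derivative of Q with respect to that mean to zero.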

Forum category: Discussions / General

Forum thread: EM for a GM

I mean, when I initialize the variance parameters I compute the covariance matrix of the sample and then take the average.

Later, when I update the covariances, they become about 1000 times larger than their initial values. Does that make any sense? It seems to happen because the update rule depends on a dot product between two vectors (here with 784 attributes), which produces very large numbers.

The formula doesn't seem to care, but it also seems it will not give the correct variance values.

Can you explain?

Forum category: Discussions / HW5

Forum thread: About the variance ]]>

Can we have an estimate of a feasible running time for the EM program?

Forum category: Discussions / HW5

Forum thread: Do you have any estimation about how long the EM should run?

On page 4 the EM algorithm for GMM is described.

It is not clear what exactly happens with Q(teta,teta-t) - it seems we do not differentiate it when we estimate the parameters in the M-step, but rather equation 12.7, which appears later. Is equation 12.7 equal to Q(teta,teta-t)? If it is - why? If not - what exactly do we differentiate?

Forum category: Discussions / General

Forum thread: EM for a GM

If I take a very large a, the resulting classifier will be almost hard-margin.

If I take a very small a, the resulting classifier will care very little about mistakes.

Am I wrong about this?

Forum category: Discussions / Past Exams

Forum thread: 2013B+2013C

So when A_S is created, it learns over a hypothesis class, right?

If the only hypothesis in the class is, for example, y = 1 if x < 0 else -1,

will the two A's you presented be different? It seems not (because there is only one hypothesis).

Will they classify differently? Yes: if S = {-1, 1} and x0 = 2, we get two different classifications for S by A_S and for T(S) by A_{T(S)}.

Yet the answer says we should get the same classification.

Forum category: Discussions / Past Exams

Forum thread: 2014 A Q2 ]]>

\begin{align} T(S) = T(x_1),\ldots,T(x_n) \end{align}

Now, denote by $A_S$ the classifier trained on S, and by $A_S(x)$ its result when applied to a new point x.

You are asked if:

\begin{equation} A_{T(S)}(T(x)) = A_S(x) \end{equation}

Forum category: Discussions / Past Exams

Forum thread: 2014 A Q2

where h() is the tree built on the original sample and h'() is the tree built on the transformed sample.

The official answer is yes.

But I gave an example where it doesn't hold - where we have only one decision-stump hypothesis, so the same tree will be built, yet it will give different classifications.

Where am I wrong?

Forum category: Discussions / Past Exams

Forum thread: 2014 A Q2

Thank you!

Forum category: Discussions / Past Exams

Forum thread: 2014/15b Q2.d

Forum category: Discussions / Past Exams

Forum thread: 2014B, Q3.2

Maybe there is some assumption about H that I don't know - something we learned during the semester about the H used in the tree-building algorithm? I looked through the scribes and only found that H is the class of "decision stumps", which as far as I know doesn't mean it can't contain only one hypothesis…

Forum category: Discussions / Past Exams

Forum thread: 2014 A Q2

If the points are the columns of X, then the PCs are the left singular vectors (columns of U).
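A quick numerical check of this statement with numpy: take centered data points as the columns of X, compute the SVD X = U S V^T, and verify that the columns of U are eigenvectors of the covariance X X^T (the dimensions and random data here are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 100))          # 3-dim points as the columns of X
X = X - X.mean(axis=1, keepdims=True)      # center each coordinate
U, S, Vt = np.linalg.svd(X, full_matrices=False)
cov = X @ X.T                              # (unnormalized) covariance matrix
# Since X = U S V^T, we have X X^T = U S^2 U^T, so each column of U is an
# eigenvector of the covariance with eigenvalue S^2 - i.e. a principal component.
```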

Forum category: Discussions / Past Exams

Forum thread: 2015B Q4B

It seems that, unlike previous years, we didn't practice it in the exercises or see it in any Tirgul…

Thanks!

Forum category: Discussions / General

Forum thread: Naive bayes

The example in recitation 12 wasn't classification - there was no label. It was an example of maximum likelihood estimation.

"Bayesian methods" is a very general term for methods that incorporate a Bayesian component. Naive Bayes is one example.
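For concreteness, here is a minimal Bernoulli Naive Bayes sketch: the "naive" part is assuming the features are conditionally independent given the label, so P(x | y) = prod_j P(x_j | y). All variable names and the toy data are made up for illustration.

```python
import numpy as np

def fit_bernoulli_nb(X, y):
    """Estimate class priors and Laplace-smoothed P(x_j = 1 | y = c)."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    theta = np.array([(X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
                      for c in classes])
    return classes, priors, theta

def predict(x, classes, priors, theta):
    """Pick the class maximizing log P(y) + sum_j log P(x_j | y)."""
    log_post = (np.log(priors)
                + (x * np.log(theta) + (1 - x) * np.log(1 - theta)).sum(axis=1))
    return classes[np.argmax(log_post)]
```

The per-feature product is exactly where the conditional-independence assumption enters; a full Bayesian method would instead keep a posterior over the parameters themselves.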

Forum category: Discussions / General

Forum thread: Naive bayes

We think the covariance matrix is X^T*X, in which case it will not change. In the answers it seems the covariance matrix is X*X^T, in which case it does change.

Thanks

Forum category: Discussions / Past Exams

Forum thread: 2014/15b Q2.d

Is the "two bits" example from recitation 12 considered a Naive Bayes approach?

Is Naive Bayes anything done with conditional probabilities, or just for x|y?

What is the difference between Naive Bayes and Bayesian methods?

Thank you :)

Forum category: Discussions / General

Forum thread: Naive bayes