Our framework can be specialized to improve either of two criteria that trade off against each other: the tightness of the proper approximation and the sample complexity. For example, we can tighten our proper approximations by taking a subsequence of . This, however, degrades the sample complexity bound, because *K*_{n} grows faster. Table 2 summarizes how the parameters of our model affect the effectiveness of learning.

Table 2

criterion | as *K*_{n} increases … | as *s* increases …
---|---|---
tightness of proper approximation | improves | improves
sample complexity bound | degrades | degrades
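The *K*_{n} column of Table 2 can be illustrated with a generic PAC-style sample complexity bound for a finite class of size *K*_{n}, of the form m ≥ (log K_{n} + log(1/δ)) / ε². This is an assumed standard form used purely for illustration, not the paper's exact bound:

```python
import math

def pac_sample_bound(K_n: int, epsilon: float, delta: float) -> float:
    """Generic PAC-style sample complexity bound for a finite class of
    size K_n: m >= (log K_n + log(1/delta)) / epsilon**2.
    Illustrative only; the paper's actual bound may differ."""
    return (math.log(K_n) + math.log(1.0 / delta)) / epsilon ** 2

# As K_n grows, the bound grows, i.e. sample complexity degrades,
# matching the K_n column of Table 2.
bounds = [pac_sample_bound(K, epsilon=0.1, delta=0.05) for K in (10, 100, 1000)]
assert bounds[0] < bounds[1] < bounds[2]
```

Under any bound of this shape, increasing *K*_{n} can only increase the required sample size, which is the trade-off the table records.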

