Reviews of “Text classification using reusable embeddings”

Reviews

Neat lab idea, but the hacker_news.stories source is gone. It's now hacker_news.full with a different schema.
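
Several reviews below hit the same root cause: the lab's queries still target the retired `hacker_news.stories` table. A minimal sketch of the fix, assuming the `title`, `url`, and `type` columns in the current `bigquery-public-data.hacker_news.full` schema (verify them in the BigQuery console before relying on this):

```python
# Sketch: the lab's opening query pointed at the new consolidated table.
# The old per-kind tables (stories, comments) were merged into `full`,
# where the `type` column distinguishes rows -- an assumption to verify.
QUERY = """
SELECT title, url
FROM `bigquery-public-data.hacker_news.full`
WHERE type = 'story'   -- replaces the old `stories` table
  AND url IS NOT NULL
LIMIT 10
"""

def run(query: str):
    """Run the query with the BigQuery client (needs credentials set up)."""
    from google.cloud import bigquery  # pip install google-cloud-bigquery
    client = bigquery.Client()
    return client.query(query).result()

if __name__ == "__main__":
    print(QUERY)
```

Running `run(QUERY)` requires an authenticated Google Cloud environment; printing the query alone needs nothing.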

Scott T. · reviewed almost 2 years ago

Shahid R. · reviewed almost 2 years ago

sumit k. · reviewed almost 2 years ago

Charles E. · reviewed almost 2 years ago

Rajkumar T. · reviewed almost 2 years ago

Eduardo P. · reviewed almost 2 years ago

Adithya S. · reviewed almost 2 years ago

The notebooks to be used in Vertex AI are outdated

Alberto R. · reviewed almost 2 years ago

Juan Jose G. · reviewed almost 2 years ago

The BigQuery table no longer exists

James C. · reviewed almost 2 years ago

Could not load the BigQuery dataset after multiple attempts

Kyle K. · reviewed almost 2 years ago

Could not set up a TensorFlow 2.3 notebook and could not query the public dataset...

Egill V. · reviewed almost 2 years ago

This lab does not work! ERROR: 404 Not found: Table bigquery-public-data:hacker_news.stories was not found in location US Location: US Job ID: d61d23c3-222f-4f52-bf57-62e20830b592

Fernando R. · reviewed almost 2 years ago

Sergio R. · reviewed almost 2 years ago

Alexis P. · reviewed almost 2 years ago

Dipesh k. · reviewed almost 2 years ago

Arun M. · reviewed almost 2 years ago

The provided BigQuery code has the errors I mentioned in another lab

Brett L. · reviewed almost 2 years ago

Could not move past this step; got an error at: "On the Notebook instances page, click New Notebook > TensorFlow Enterprise > TensorFlow Enterprise 2.3 (with LTS) > Without GPUs."

Oleksandr P. · reviewed almost 2 years ago

404 Not found: Table bigquery-public-data:hacker_news.stories was not found in location US. You need to use the table `bigquery-public-data.hacker_news.full` and SAFE_OFFSET in the SQL queries.
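
The SAFE_OFFSET part of this fix matters because BigQuery's `OFFSET(i)` raises an error when `i` is past the end of an array, while `SAFE_OFFSET(i)` returns NULL instead. A Python model of the two behaviours (function names are mine, for illustration only):

```python
def offset(arr, i):
    """Strict indexing: raises out of range, like BigQuery's OFFSET(i)."""
    return arr[i]  # IndexError past the end of the list

def safe_offset(arr, i):
    """Lenient indexing: None out of range, like BigQuery's SAFE_OFFSET(i)."""
    return arr[i] if 0 <= i < len(arr) else None

# A two-part hostname has no element at index 2, so strict indexing fails
# mid-query while the SAFE_ variant just yields NULL for that row.
parts = "github.com".split(".")
print(safe_offset(parts, 2))  # -> None instead of an error
```

This is why swapping `OFFSET` for `SAFE_OFFSET` stops the lab's query from aborting on URLs whose hosts have fewer dot-separated parts than the query expects.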

Volodymyr S. · reviewed almost 2 years ago

Arun P. · reviewed almost 2 years ago

404 Not found: Table bigquery-public-data:hacker_news.stories was not found in location US

Oleg G. · reviewed almost 2 years ago

Rushi S. · reviewed almost 2 years ago

Nick G. · reviewed almost 2 years ago

This lab's notebook should be fixed:
- The hacker_news dataset is now located at `bigquery-public-data.hacker_news.full`.
- The second query does not work because some URLs cannot be sliced after splitting. To make it work, I added a third condition to the WHERE clause: AND `ARRAY_LENGTH(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.')) > 1`.
- Even then, though, I could not successfully run the rest of the cells.
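
The extra WHERE condition in this review keeps only rows whose extracted host splits into more than one dot-separated part. A Python sketch of the same filter (the regex is copied from the review above; the function and test URLs are mine):

```python
import re

# Same pattern as the reviewer's REGEXP_EXTRACT: capture the host between
# "://" and the next "/". Note it requires a "/" after the host, so bare
# URLs like "https://example.com" do not match (and would be NULL in SQL).
HOST_RE = re.compile(r'.*://(.[^/]+)/')

def passes_filter(url: str) -> bool:
    """Mimic: ARRAY_LENGTH(SPLIT(REGEXP_EXTRACT(url, ...), '.')) > 1."""
    m = HOST_RE.match(url)
    if m is None:               # REGEXP_EXTRACT would return NULL here
        return False
    return len(m.group(1).split('.')) > 1

print(passes_filter("https://news.ycombinator.com/item?id=1"))  # True
print(passes_filter("http://localhost/"))                       # False
```

Rows filtered this way can no longer fail when the query slices the split host, which is the crash the reviewer describes.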

Jordi V. · reviewed almost 2 years ago

We cannot guarantee that published reviews come from consumers who have purchased or used the product. Reviews are not verified by Google.