Reviews of "Text classification using reusable embeddings"
3,805 reviews
Neat lab idea, but the hacker_news.stories source is gone. It's now hacker_news.full with a different schema.
Scott T. · Reviewed almost 2 years ago
Shahid R. · Reviewed almost 2 years ago
sumit k. · Reviewed almost 2 years ago
Charles E. · Reviewed almost 2 years ago
Rajkumar T. · Reviewed almost 2 years ago
Eduardo P. · Reviewed almost 2 years ago
Adithya S. · Reviewed almost 2 years ago
The notebooks to be used in Vertex AI are outdated.
Alberto R. · Reviewed almost 2 years ago
Juan Jose G. · Reviewed almost 2 years ago
The BigQuery table no longer exists.
James C. · Reviewed almost 2 years ago
Could not load the BigQuery dataset after multiple attempts.
Kyle K. · Reviewed almost 2 years ago
Could not set up a TensorFlow 2.3 notebook and could not query the public dataset...
Egill V. · Reviewed almost 2 years ago
This lab does not work! ERROR: 404 Not found: Table bigquery-public-data:hacker_news.stories was not found in location US Location: US Job ID: d61d23c3-222f-4f52-bf57-62e20830b592
Fernando R. · Reviewed almost 2 years ago
Sergio R. · Reviewed almost 2 years ago
Alexis P. · Reviewed almost 2 years ago
Dipesh k. · Reviewed almost 2 years ago
Arun M. · Reviewed almost 2 years ago
The code provided for the BigQuery database has errors, which I mentioned in another lab.
Brett L. · Reviewed almost 2 years ago
Could not move past this step; got an error at: "On the Notebook instances page, click New Notebook > TensorFlow Enterprise > TensorFlow Enterprise 2.3 (with LTS) > Without GPUs."
Oleksandr P. · Reviewed almost 2 years ago
404 Not found: Table bigquery-public-data:hacker_news.stories was not found in location US. You need to use the table `bigquery-public-data.hacker_news.full` and SAFE_OFFSET in the SQL queries.
Volodymyr S. · Reviewed almost 2 years ago
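Based on these reviews, the renamed-table fix might look like the following sketch. The reviews don't show the lab's original query, so the selected columns (`title`, `url`) and the `WHERE` filter here are assumptions, not the lab's actual code:

```sql
-- Sketch: the old bigquery-public-data:hacker_news.stories table is gone;
-- the hacker_news.full table is its replacement.
-- SAFE_OFFSET returns NULL instead of raising an error when the array
-- produced by SPLIT has fewer elements than the requested index.
SELECT
  title,
  SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.')[SAFE_OFFSET(1)] AS source
FROM
  `bigquery-public-data.hacker_news.full`
WHERE
  url IS NOT NULL
  AND title IS NOT NULL
```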
Arun P. · Reviewed almost 2 years ago
404 Not found: Table bigquery-public-data:hacker_news.stories was not found in location US
Oleg G. · Reviewed almost 2 years ago
Rushi S. · Reviewed almost 2 years ago
Nick G. · Reviewed almost 2 years ago
This lab's notebook should be fixed: - the hacker_news dataset is now at `bigquery-public-data.hacker_news.full`; - the second query does not work because some URLs cannot be sliced after SPLIT. To make it work, I added a third condition to the WHERE clause: AND `ARRAY_LENGTH(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.')) > 1`. Even then, though, I could not successfully run the rest of the cells.
Jordi V. · Reviewed almost 2 years ago
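The failure mode this reviewer describes — hosts with no second dot-separated component, so indexing element 1 of the split fails — can be mirrored in plain Python. This is a hypothetical illustration of the guard, not code from the lab:

```python
import re


def extract_domain_part(url):
    """Mimic the query logic the reviewer quotes:
    REGEXP_EXTRACT(url, '.*://(.[^/]+)/') followed by taking
    element 1 of SPLIT(..., '.').
    Returns None where BigQuery's SAFE_OFFSET would return NULL.
    """
    m = re.match(r'.*://(.[^/]+)/', url)
    if m is None:
        return None
    parts = m.group(1).split('.')
    # Guard equivalent to the reviewer's ARRAY_LENGTH(...) > 1 condition:
    # a bare host like "localhost" has no second dot-separated component,
    # so a plain OFFSET(1) lookup would error out.
    if len(parts) < 2:
        return None
    return parts[1]


print(extract_domain_part('https://news.ycombinator.com/item?id=1'))  # ycombinator
print(extract_domain_part('http://localhost/index.html'))             # None
```

The same guard can be expressed either as the extra `ARRAY_LENGTH` condition in the WHERE clause or by switching the index lookup from `OFFSET(1)` to `SAFE_OFFSET(1)`, which yields NULL instead of an error.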
We cannot guarantee that published reviews come from consumers who have purchased or used the product. Reviews are not verified by Google.