Rename lang_srv to language_server

This commit is contained in:
Ayaz Hafiz 2024-04-21 18:03:56 -05:00
parent 0d44ebb46e
commit 88c4a3af4e
No known key found for this signature in database
GPG key ID: 0E2A37416A25EF58
19 changed files with 4 additions and 4 deletions


@@ -0,0 +1,37 @@
[package]
name = "roc_language_server"
version = "0.0.1"
edition = "2021"
[[bin]]
name = "roc_language_server"
path = "src/server.rs"
[dev-dependencies]
expect-test = "1.4.1"
[dependencies]
roc_can = { path = "../compiler/can" }
roc_collections = { path = "../compiler/collections" }
roc_fmt = { path = "../compiler/fmt" }
roc_load = { path = "../compiler/load" }
roc_module = { path = "../compiler/module" }
roc_parse = { path = "../compiler/parse" }
roc_problem = { path = "../compiler/problem" }
roc_region = { path = "../compiler/region" }
roc_reporting = { path = "../reporting" }
roc_solve_problem = { path = "../compiler/solve_problem" }
roc_target = { path = "../compiler/roc_target" }
roc_types = { path = "../compiler/types" }
roc_packaging = { path = "../packaging" }
bumpalo.workspace = true
parking_lot.workspace = true
tower-lsp = "0.17.0"
tokio = { version = "1.20.1", features = [ "rt", "rt-multi-thread", "macros", "io-std" ] }
log.workspace = true
indoc.workspace = true
env_logger = "0.10.1"
futures.workspace = true


@@ -0,0 +1,98 @@
# roc_language_server
This is a basic language server for Roc.
Support for the following LSP features is provided:
- Inline diagnostics
- Hover to view type of value
- Go-to-definition
- <details><summary>Example</summary>
https://github.com/ayazhafiz/roc/assets/20735482/23a57d06-5b70-46f2-b0c4-5836eaec669b
</details>
- Note that go-to-definition for the builtins does not yet work.
- Go-to-definition for abilities resolves to their specialization, if one exists.
- <details><summary>Example</summary>
https://github.com/ayazhafiz/roc/assets/20735482/1ba98bf9-518b-4c47-b606-a6ce6767566f
</details>
- Formatting Roc files on save
- <details><summary>Example</summary>
https://github.com/ayazhafiz/roc/assets/20735482/fbbe4bc1-64af-4c7d-b633-d7761906df11
</details>
[Semantic highlighting](https://github.com/microsoft/vscode/wiki/Semantic-Highlighting-Overview#what-is-the-difference-between-syntax-and-semantic-highlighting) will be added soon. Additional features require
changes to the compiler infrastructure that are not yet available.
Note that the language server is a bit naïve:
- If you make a change in a dependency, you'll also need to make a change in
the dependents' files for the changes to be picked up.
- The language server only picks up changes on save, so auto-saving is recommended.
## Installing
The roc_language_server binary is included with the [nightly releases](https://github.com/roc-lang/roc/releases). We recommend using the same version of roc and roc_language_server.
### Building from source
Follow the [building from source](https://github.com/roc-lang/roc/blob/main/BUILDING_FROM_SOURCE.md) instructions for roc. Then run:
```
# do `nix develop` first if you're using nix!
cargo build --bin roc_language_server --release
```
This will give you the language server binary at:
```
target/release/roc_language_server
```
### Configuring in your editor
Please follow the documentation for your editor's language server client to see how custom language servers should be configured.
#### [coc.nvim](https://github.com/neoclide/coc.nvim)
Add the following to your coc JSON configuration file:
```
{
"languageserver": {
"roc": {
"command": "<path to binary folder>/roc_language_server",
"filetypes": ["roc"]
}
}
}
```
If you're using coc.nvim and want to use the configuration above, be sure to also tell vim that `*.roc` files have the `roc` filetype.
## Debug
If you want to debug the server, use [debug_server.sh](./debug_server.sh)
instead of the direct binary.
If you would like to enable debug logging, set the `ROCLS_LOG` environment variable to `debug`, or to `trace` for even more logs, e.g. `ROCLS_LOG=debug`.
## Testing
Tests use expect-test, which is a snapshot/expect testing framework.
If a change is made that requires updating the expect tests, run `cargo test`, confirm that the diff is correct, then run `UPDATE_EXPECT=1 cargo test` to update the contents of the files with the new output.
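Concretely, the snapshot workflow is a two-step loop (`UPDATE_EXPECT` is expect-test's standard switch):

```shell
# Run the tests; expect-test prints a diff for any snapshot that no longer matches.
cargo test

# Once the diff looks right, rewrite the stored snapshots in place and re-run.
UPDATE_EXPECT=1 cargo test
```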
## Config
You can set the environment variables below to control the operation of the language server.
`ROCLS_DEBOUNCE_MS`: Sets the amount of time to delay starting analysis of the document when a change comes in. This prevents starting pointless analysis while you are typing normally.
Default: `100`
`ROCLS_LATEST_DOC_TIMEOUT_MS`: Sets the timeout for waiting for an analysis of the latest document to be complete. If a request is sent that needs the latest version of the document to be analyzed, then it will wait up to this duration before just giving up.
Default: `5000`
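For example, to launch the server with verbose logging and tweaked timings (the values and binary path here are illustrative):

```shell
# Hypothetical invocation; point at wherever your roc_language_server binary lives.
ROCLS_LOG=debug \
ROCLS_DEBOUNCE_MS=200 \
ROCLS_LATEST_DOC_TIMEOUT_MS=10000 \
./target/release/roc_language_server
```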


@@ -0,0 +1,39 @@
# Improvements to the language server.
## Performance
- [ ] Implement some performance logging for actions like completion, go-to-definition, hover, etc.
### Completion
Currently the way we handle documentation and type info for completion requires us to perform all the computation up front, with no caching. Documentation is also quite inefficient and likely involves a lot of repeated computation, which could be slow in files with lots of doc comments. The language server protocol allows us to defer getting the info for a completion until the item is actually selected in the editor, which could speed up completion requests.
We would need to profile this to see how performant it really is.
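The deferral described above could look roughly like the following self-contained sketch. All names here are hypothetical (the real implementation would hang off tower-lsp's `completionItem/resolve` round-trip); it only illustrates the shape of lazy, memoized detail computation:

```rust
use std::collections::HashMap;

/// Hypothetical sketch: completion candidates carry only a cheap label up
/// front; expensive details (type strings, docs) are computed lazily when an
/// item is actually selected, and memoized so repeated resolves are free.
struct LazyCompletions {
    /// Cheap data computed for every candidate at completion time.
    labels: Vec<String>,
    /// Expensive data, filled in only when a specific item is resolved.
    resolved: HashMap<String, String>,
}

impl LazyCompletions {
    fn new(labels: Vec<String>) -> Self {
        Self { labels, resolved: HashMap::new() }
    }

    /// Called when the editor selects an item (the LSP
    /// `completionItem/resolve` request in a real server).
    fn resolve(&mut self, label: &str) -> Option<&String> {
        if !self.labels.iter().any(|l| l == label) {
            return None;
        }
        Some(self.resolved.entry(label.to_string()).or_insert_with(|| {
            // Stand-in for the expensive docs/type-info computation.
            format!("docs for {label}")
        }))
    }
}

fn main() {
    let mut completions = LazyCompletions::new(vec!["map".into(), "walk".into()]);
    // Nothing expensive has run yet.
    assert!(completions.resolved.is_empty());
    // Resolving one item computes details for that item only.
    assert_eq!(completions.resolve("map").unwrap(), "docs for map");
    assert_eq!(completions.resolved.len(), 1);
    assert!(completions.resolve("missing").is_none());
    println!("resolved: {}", completions.resolved.len());
}
```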
## Features
- [ ] Rename refactoring #HighPriority
- [ ] Show references #HighPriority
Initially this could just be within the current file, and it could be expanded to multi-file later
Should have a lot in common with rename refactoring
- [ ] Completion within the import section
### Code Actions
- [ ] Create cases of a `when ... is` block
- [ ] Destructure record
- [ ] Extract selection into its own function (this one seems hard)
- [ ] Add function to exposed list
### Completion
- [ ] Completion of Tags #HighPriority
- [ ] Completion of Types inside signatures
- [ ] Completion of `when ... is` cases
- [ ] Completion of record fields
- [ ] During destructuring
- [ ] When creating records
- [ ] When describing records inside function params
- [ ] Completion of unimported vars that are exposed by modules within the project (will need an appropriate indicator and ranking so as not to be annoying)
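The "indicator and ranking" idea in the last item could lean on the LSP `sortText` field, which clients use for ordering (falling back to the label). A minimal sketch, with hypothetical names not in this commit:

```rust
/// Minimal sketch: rank completion candidates so that unimported symbols sort
/// after everything already in scope. A numeric group prefix on `sortText` is
/// enough, because LSP clients sort lexicographically by that string.
struct Candidate {
    label: String,
    imported: bool,
}

fn sort_text(candidate: &Candidate) -> String {
    // "0" groups in-scope symbols first; "1" pushes unimported ones below.
    let group = if candidate.imported { '0' } else { '1' };
    format!("{group}_{}", candidate.label)
}

fn main() {
    let mut items = vec![
        Candidate { label: "zip".into(), imported: false },
        Candidate { label: "walk".into(), imported: true },
    ];
    items.sort_by_key(sort_text);
    let labels: Vec<&str> = items.iter().map(|c| c.label.as_str()).collect();
    // The in-scope symbol sorts first even though "w" > "z" alphabetically.
    assert_eq!(labels, vec!["walk", "zip"]);
    println!("{labels:?}");
}
```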


@@ -0,0 +1,5 @@
#!/usr/bin/bash
SCRIPT_DIR=$(cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd)
RUST_LOG=debug "${SCRIPT_DIR}/../../target/debug/roc_language_server" "$@" 2> /tmp/roc_language_server.err


@@ -0,0 +1,390 @@
use std::{
collections::HashMap,
path::{Path, PathBuf},
sync::Arc,
};
use bumpalo::Bump;
use parking_lot::Mutex;
use roc_can::{abilities::AbilitiesStore, expr::Declarations};
use roc_collections::{MutMap, MutSet, VecMap};
use roc_load::{docs::ModuleDocumentation, CheckedModule, LoadedModule};
use roc_module::symbol::{Interns, ModuleId, Symbol};
use roc_packaging::cache::{self, RocCacheDir};
use roc_region::all::LineInfo;
use roc_reporting::report::RocDocAllocator;
use roc_solve_problem::TypeError;
use roc_types::subs::{Subs, Variable};
use tower_lsp::lsp_types::{Diagnostic, SemanticTokenType, Url};
mod analysed_doc;
mod completion;
mod parse_ast;
mod semantic_tokens;
mod tokens;
mod utils;
use crate::convert::diag::{IntoLspDiagnostic, ProblemFmt};
pub(crate) use self::analysed_doc::{AnalyzedDocument, DocInfo};
use self::{analysed_doc::ModuleIdToUrl, tokens::Token};
pub const HIGHLIGHT_TOKENS_LEGEND: &[SemanticTokenType] = Token::LEGEND;
/// Contains hashmaps of info about all modules that were analyzed
#[derive(Debug)]
pub(super) struct ModulesInfo {
subs: Mutex<HashMap<ModuleId, Subs>>,
exposed: HashMap<ModuleId, Arc<Vec<(Symbol, Variable)>>>,
docs: VecMap<ModuleId, ModuleDocumentation>,
}
impl ModulesInfo {
fn with_subs<F, A>(&self, mod_id: &ModuleId, f: F) -> Option<A>
where
F: FnOnce(&mut Subs) -> A,
{
self.subs.lock().get_mut(mod_id).map(f)
}
/// Transforms some of the raw data from the analysis into a state that is
/// more useful during processes like completion.
fn from_analysis(
exposes: MutMap<ModuleId, Vec<(Symbol, Variable)>>,
typechecked: &MutMap<ModuleId, CheckedModule>,
docs_by_module: VecMap<ModuleId, ModuleDocumentation>,
) -> ModulesInfo {
// We wrap this in Arc because later we will go through each module's imports and
// store the full list of symbols that each imported module exposes.
// example: A imports B. B exposes [add, multiply, divide] and A will store a reference to that list.
let exposed = exposes
.into_iter()
.map(|(module_id, symbols)| (module_id, Arc::new(symbols)))
.collect::<HashMap<_, _>>();
// Combine the subs from all modules
let all_subs = Mutex::new(
typechecked
.iter()
.map(|(module_id, checked_module)| {
(*module_id, checked_module.solved_subs.0.clone())
})
.collect::<HashMap<_, _>>(),
);
ModulesInfo {
subs: all_subs,
exposed,
docs: docs_by_module,
}
}
}
#[derive(Debug, Clone)]
pub(super) struct AnalyzedModule {
exposed_imports: Vec<(Symbol, Variable)>,
/// imports are grouped by which module they come from
imports: HashMap<ModuleId, Arc<Vec<(Symbol, Variable)>>>,
module_id: ModuleId,
interns: Interns,
subs: Subs,
abilities: AbilitiesStore,
declarations: Declarations,
modules_info: Arc<ModulesInfo>,
// We need this because ModuleIds are not stable between compilations, so a ModuleId
// visible to one module may not be globally valid across the language server.
module_id_to_url: ModuleIdToUrl,
}
#[derive(Debug, Clone)]
pub struct AnalysisResult {
module: Option<AnalyzedModule>,
diagnostics: Vec<Diagnostic>,
}
pub(crate) fn global_analysis(doc_info: DocInfo) -> Vec<AnalyzedDocument> {
let fi = doc_info.url.to_file_path().unwrap();
let src_dir = find_src_dir(&fi).to_path_buf();
let arena = Bump::new();
let loaded = roc_load::load_and_typecheck_str(
&arena,
fi,
&doc_info.source,
src_dir,
roc_target::Target::LinuxX64,
roc_load::FunctionKind::LambdaSet,
roc_reporting::report::RenderTarget::Generic,
RocCacheDir::Persistent(cache::roc_cache_dir().as_path()),
roc_reporting::report::DEFAULT_PALETTE,
);
let module = match loaded {
Ok(module) => module,
Err(problem) => {
let all_problems = problem
.into_lsp_diagnostic(&())
.into_iter()
.collect::<Vec<_>>();
let analyzed_document = AnalyzedDocument {
doc_info,
analysis_result: AnalysisResult {
module: None,
diagnostics: all_problems,
},
};
return vec![analyzed_document];
}
};
let mut documents = vec![];
let LoadedModule {
interns,
mut can_problems,
mut type_problems,
mut declarations_by_id,
sources,
mut typechecked,
solved,
abilities_store,
exposed_imports,
mut imports,
exposes,
docs_by_module,
..
} = module;
let mut root_module = Some(RootModule {
subs: solved.into_inner(),
abilities_store,
});
let exposed_imports = resolve_exposed_imports(exposed_imports, &exposes);
let modules_info = Arc::new(ModulesInfo::from_analysis(
exposes,
&typechecked,
docs_by_module,
));
let mut builder = AnalyzedDocumentBuilder {
interns: &interns,
module_id_to_url: module_id_to_url_from_sources(&sources),
can_problems: &mut can_problems,
type_problems: &mut type_problems,
declarations_by_id: &mut declarations_by_id,
typechecked: &mut typechecked,
root_module: &mut root_module,
exposed_imports,
imports: &mut imports,
modules_info,
};
for (module_id, (path, source)) in sources {
let doc = builder.build_document(path, source, module_id, doc_info.version);
documents.push(doc);
}
documents
}
/// Take the exposed imports from each module, lookup the symbol within that module's list of
/// exposed symbols and then get the type info for that import.
/// example: `import {Task.{await}}`. `await` is an exposed_import, so we need to lookup its type info.
fn resolve_exposed_imports(
exposed_imports: MutMap<ModuleId, MutMap<Symbol, roc_region::all::Region>>,
exposes: &MutMap<ModuleId, Vec<(Symbol, Variable)>>,
) -> HashMap<ModuleId, Vec<(Symbol, Variable)>> {
let get_exposed_symbol_info = |symbol: &Symbol, module_id: &ModuleId| {
exposes
.get(module_id)?
.iter()
.find(|(symb, _)| symb == symbol)
};
exposed_imports
.into_iter()
.map(|(module_id, symbols)| {
(
module_id,
symbols
.into_iter()
.filter_map(|(symbol, _)| get_exposed_symbol_info(&symbol, &module_id))
.cloned()
.collect::<Vec<_>>(),
)
})
.collect()
}
fn find_src_dir(path: &Path) -> &Path {
path.parent().unwrap_or(path)
}
fn _find_parent_git_repo(path: &Path) -> Option<&Path> {
let mut path = path;
loop {
if path.join(".git").exists() {
return Some(path);
}
path = path.parent()?;
}
}
fn module_id_to_url_from_sources(sources: &MutMap<ModuleId, (PathBuf, Box<str>)>) -> ModuleIdToUrl {
sources
.iter()
.map(|(module_id, (path, _))| {
let url = path_to_url(path);
(*module_id, url)
})
.collect()
}
fn path_to_url(path: &Path) -> Url {
if path.is_relative() {
// Make it <tmpdir>/path
let tmpdir = std::env::temp_dir();
Url::from_file_path(tmpdir.join(path)).unwrap()
} else {
Url::from_file_path(path).unwrap()
}
}
struct RootModule {
subs: Subs,
abilities_store: AbilitiesStore,
}
struct AnalyzedDocumentBuilder<'a> {
interns: &'a Interns,
module_id_to_url: ModuleIdToUrl,
can_problems: &'a mut MutMap<ModuleId, Vec<roc_problem::can::Problem>>,
type_problems: &'a mut MutMap<ModuleId, Vec<TypeError>>,
declarations_by_id: &'a mut MutMap<ModuleId, Declarations>,
typechecked: &'a mut MutMap<ModuleId, CheckedModule>,
root_module: &'a mut Option<RootModule>,
imports: &'a mut MutMap<ModuleId, MutSet<ModuleId>>,
exposed_imports: HashMap<ModuleId, Vec<(Symbol, Variable)>>,
modules_info: Arc<ModulesInfo>,
}
impl<'a> AnalyzedDocumentBuilder<'a> {
fn build_document(
&mut self,
path: PathBuf,
source: Box<str>,
module_id: ModuleId,
version: i32,
) -> AnalyzedDocument {
let subs;
let abilities;
let declarations;
// Look up the type info for each import from the module where it was exposed.
let this_imports = self.imports.remove(&module_id).unwrap_or_default();
let imports = self.get_symbols_for_imports(this_imports);
let exposed_imports = self.exposed_imports.remove(&module_id).unwrap_or_default();
if let Some(m) = self.typechecked.remove(&module_id) {
subs = m.solved_subs.into_inner();
abilities = m.abilities_store;
declarations = m.decls;
} else {
let rm = self.root_module.take().unwrap();
subs = rm.subs;
abilities = rm.abilities_store;
declarations = self.declarations_by_id.remove(&module_id).unwrap();
}
let analyzed_module = AnalyzedModule {
exposed_imports,
imports,
subs,
abilities,
declarations,
module_id,
modules_info: self.modules_info.clone(),
interns: self.interns.clone(),
module_id_to_url: self.module_id_to_url.clone(),
};
let line_info = LineInfo::new(&source);
let diagnostics = self.build_diagnostics(&path, &source, &line_info, module_id);
AnalyzedDocument {
doc_info: DocInfo {
url: path_to_url(&path),
line_info,
source: source.into(),
version,
},
analysis_result: AnalysisResult {
module: Some(analyzed_module),
diagnostics,
},
}
}
/// Gets the exposed symbols and type info for each imported module.
fn get_symbols_for_imports(
&mut self,
imports: MutSet<ModuleId>,
) -> HashMap<ModuleId, Arc<Vec<(Symbol, Variable)>>> {
imports
.into_iter()
.map(|id| {
(
id,
self.modules_info
.exposed
.get(&id)
.unwrap_or(&Arc::new(vec![]))
.clone(),
)
})
.collect::<HashMap<_, _>>()
}
fn build_diagnostics(
&mut self,
source_path: &Path,
source: &str,
line_info: &LineInfo,
module_id: ModuleId,
) -> Vec<Diagnostic> {
let lines: Vec<_> = source.lines().collect();
let alloc = RocDocAllocator::new(&lines, module_id, self.interns);
let mut all_problems = Vec::new();
let fmt = ProblemFmt {
alloc: &alloc,
line_info,
path: source_path,
};
let can_problems = self.can_problems.remove(&module_id).unwrap_or_default();
let type_problems = self.type_problems.remove(&module_id).unwrap_or_default();
for can_problem in can_problems {
if let Some(diag) = can_problem.into_lsp_diagnostic(&fmt) {
all_problems.push(diag);
}
}
for type_problem in type_problems {
if let Some(diag) = type_problem.into_lsp_diagnostic(&fmt) {
all_problems.push(diag);
}
}
all_problems
}
}


@@ -0,0 +1,309 @@
use log::{debug, info};
use std::collections::HashMap;
use bumpalo::Bump;
use roc_module::symbol::{ModuleId, Symbol};
use roc_region::all::LineInfo;
use tower_lsp::lsp_types::{
CompletionItem, Diagnostic, GotoDefinitionResponse, Hover, HoverContents, LanguageString,
Location, MarkedString, Position, Range, SemanticTokens, SemanticTokensResult, TextEdit, Url,
};
use crate::{
analysis::completion::{field_completion, get_completion_items, get_module_completion_items},
convert::{ToRange, ToRocPosition},
};
use super::{
parse_ast::Ast,
semantic_tokens::arrange_semantic_tokens,
utils::{format_var_type, is_roc_identifier_char},
AnalysisResult, AnalyzedModule,
};
pub(super) type ModuleIdToUrl = HashMap<ModuleId, Url>;
#[derive(Debug, Clone)]
pub struct AnalyzedDocument {
pub doc_info: DocInfo,
pub analysis_result: AnalysisResult,
}
#[derive(Debug, Clone)]
pub struct DocInfo {
pub url: Url,
pub line_info: LineInfo,
pub source: String,
pub version: i32,
}
impl DocInfo {
pub fn new(url: Url, source: String, version: i32) -> Self {
Self {
url,
line_info: LineInfo::new(&source),
source,
version,
}
}
#[cfg(debug_assertions)]
#[allow(unused)]
fn debug_log_prefix(&self, offset: usize) {
debug!("Prefix source: {:?}", self.source);
// Clamp the window so offsets near the start or end of the source don't panic.
let start = offset.saturating_sub(5);
let end = (offset + 5).min(self.source.len());
let last_few = self.source.get(start..end).unwrap();
let (before, after) = last_few.split_at(offset - start);
debug!(
"Starting to get completion items at offset: {:?} content: '{:?}|{:?}'",
offset, before, after
);
}
fn whole_document_range(&self) -> Range {
let start = Position::new(0, 0);
let end = Position::new(self.line_info.num_lines(), 0);
Range::new(start, end)
}
pub fn get_prefix_at_position(&self, position: Position) -> String {
let position = position.to_roc_position(&self.line_info);
let offset = position.offset as usize;
let source = &self.source.as_bytes()[..offset];
let symbol_len = source
.iter()
.rev()
.take_while(|&a| is_roc_identifier_char(&(*a as char)))
.count();
let symbol = &self.source[offset - symbol_len..offset];
String::from(symbol)
}
pub fn format(&self) -> Option<Vec<TextEdit>> {
let source = &self.source;
let arena = &Bump::new();
let ast = Ast::parse(arena, source).ok()?;
let fmt = ast.fmt();
if source == fmt.as_str() {
None
} else {
let range = self.whole_document_range();
let text_edit = TextEdit::new(range, fmt.to_string());
Some(vec![text_edit])
}
}
pub fn semantic_tokens(&self) -> Option<SemanticTokensResult> {
let source = &self.source;
let arena = &Bump::new();
let ast = Ast::parse(arena, source).ok()?;
let tokens = ast.semantic_tokens();
let data = arrange_semantic_tokens(tokens, &self.line_info);
Some(SemanticTokensResult::Tokens(SemanticTokens {
result_id: None,
data,
}))
}
}
impl AnalyzedDocument {
pub fn url(&self) -> &Url {
&self.doc_info.url
}
fn line_info(&self) -> &LineInfo {
&self.doc_info.line_info
}
fn module(&self) -> Option<&AnalyzedModule> {
self.analysis_result.module.as_ref()
}
fn location(&self, range: Range) -> Location {
Location {
uri: self.doc_info.url.clone(),
range,
}
}
pub fn type_checked(&self) -> bool {
self.analysis_result.module.is_some()
}
pub fn diagnostics(&self) -> Vec<Diagnostic> {
self.analysis_result.diagnostics.clone()
}
pub fn symbol_at(&self, position: Position) -> Option<Symbol> {
let line_info = self.line_info();
let position = position.to_roc_position(line_info);
let AnalyzedModule {
declarations,
abilities,
..
} = self.module()?;
let found_symbol =
roc_can::traverse::find_closest_symbol_at(position, declarations, abilities)?;
Some(found_symbol.implementation_symbol())
}
pub fn hover(&self, position: Position) -> Option<Hover> {
let line_info = self.line_info();
let pos = position.to_roc_position(line_info);
let AnalyzedModule {
subs,
declarations,
module_id,
interns,
modules_info,
..
} = self.module()?;
let (region, var) = roc_can::traverse::find_closest_type_at(pos, declarations)?;
// TODO: Can this be integrated into find_closest_type_at? Is it even worth it?
let docs_opt = self
.symbol_at(position)
.and_then(|symb| modules_info.docs.get(module_id)?.get_doc_for_symbol(&symb));
let type_str = format_var_type(var, &mut subs.clone(), module_id, interns);
let range = region.to_range(self.line_info());
let type_content = MarkedString::LanguageString(LanguageString {
language: "roc".to_string(),
value: type_str,
});
let content = vec![Some(type_content), docs_opt.map(MarkedString::String)]
.into_iter()
.flatten()
.collect::<Vec<_>>();
Some(Hover {
contents: HoverContents::Array(content),
range: Some(range),
})
}
pub fn definition(&self, symbol: Symbol) -> Option<GotoDefinitionResponse> {
let AnalyzedModule { declarations, .. } = self.module()?;
let found_declaration = roc_can::traverse::find_declaration(symbol, declarations)?;
let range = found_declaration.region().to_range(self.line_info());
Some(GotoDefinitionResponse::Scalar(self.location(range)))
}
pub(crate) fn module_url(&self, module_id: ModuleId) -> Option<Url> {
self.module()?.module_id_to_url.get(&module_id).cloned()
}
pub fn completion_items(
&self,
position: Position,
latest_doc: &DocInfo,
) -> Option<Vec<CompletionItem>> {
let symbol_prefix = latest_doc.get_prefix_at_position(position);
debug!(
"Starting to get completion items for prefix: {:?} docVersion:{:?}",
symbol_prefix, latest_doc.version
);
let len_diff = latest_doc.source.len() as i32 - self.doc_info.source.len() as i32;
// We offset the position because it needs to land in the correct scope in the most
// recently parsed version of the source. The quick and dirty method is to subtract
// the difference in length between the two sources from the offset. This could
// cause issues, but is very simple.
// TODO: this is kind of a hack and should be removed once we can do some minimal
// parsing without full type checking.
let mut position = position.to_roc_position(&latest_doc.line_info);
position.offset = (position.offset as i32 - len_diff - 1) as u32;
debug!("Completion offset: {:?}", position.offset);
let AnalyzedModule {
module_id,
interns,
subs,
declarations,
exposed_imports,
imports,
modules_info,
..
} = self.module()?;
let is_field_or_module_completion = symbol_prefix.contains('.');
if is_field_or_module_completion {
// If the second to last section is capitalised we know we are completing a
// module inside an import of a module, e.g.: My.Module.function
let is_module_completion = symbol_prefix
.split('.')
.nth_back(1) // second to last
.and_then(|str| str.chars().next().map(|c| c.is_uppercase()))
.unwrap_or(false);
if is_module_completion {
info!("Getting module dot completion...");
Some(get_module_completion_items(
symbol_prefix,
interns,
imports,
modules_info,
true,
))
} else {
info!("Getting record dot completion...");
field_completion(
position,
symbol_prefix,
declarations,
interns,
&mut subs.clone(),
module_id,
)
}
} else {
let is_module_or_type_completion = symbol_prefix
.chars()
.next()
.map_or(false, |c| c.is_uppercase());
if is_module_or_type_completion {
info!("Getting module completion...");
let completions = get_module_completion_items(
symbol_prefix,
interns,
imports,
modules_info,
true,
);
Some(completions)
} else {
info!("Getting variable completion...");
let completions = get_completion_items(
position,
symbol_prefix,
declarations,
&mut subs.clone(),
module_id,
interns,
modules_info.docs.get(module_id),
exposed_imports,
);
Some(completions)
}
}
}
}


@@ -0,0 +1,397 @@
use std::{collections::HashMap, sync::Arc};
use log::{debug, warn};
use roc_can::{expr::Declarations, traverse::Visitor};
use roc_collections::MutMap;
use roc_load::docs::{DocDef, ModuleDocumentation};
use roc_module::symbol::{Interns, ModuleId, Symbol};
use roc_region::all::Position;
use roc_types::{
subs::{Subs, Variable},
types::Alias,
};
use tower_lsp::lsp_types::{self, CompletionItem, CompletionItemKind};
use self::visitor::CompletionVisitor;
use super::{utils::format_var_type, ModulesInfo};
mod formatting;
mod visitor;
fn get_completions(
position: Position,
decls: &Declarations,
prefix: String,
interns: &Interns,
) -> Vec<(Symbol, Variable)> {
let mut visitor = CompletionVisitor {
position,
found_declarations: Vec::new(),
interns,
prefix,
};
visitor.visit_decls(decls);
visitor.found_declarations
}
#[allow(clippy::too_many_arguments)]
/// Walks through declarations that would be accessible from the provided
/// position adding them to a list of completion items until all accessible
/// declarations have been fully explored.
pub fn get_completion_items(
position: Position,
prefix: String,
decls: &Declarations,
subs: &mut Subs,
module_id: &ModuleId,
interns: &Interns,
docs: Option<&ModuleDocumentation>,
exposed_imports: &[(Symbol, Variable)],
) -> Vec<CompletionItem> {
let mut completions = get_completions(position, decls, prefix, interns);
completions.extend(exposed_imports);
debug!("extended with:{:#?}", exposed_imports);
make_completion_items(subs, module_id, interns, docs, completions)
}
pub(super) fn get_module_completion_items(
prefix: String,
interns: &Interns,
imported_modules: &HashMap<ModuleId, Arc<Vec<(Symbol, Variable)>>>,
modules_info: &ModulesInfo,
just_modules: bool,
) -> Vec<CompletionItem> {
let module_completions = imported_modules
.iter()
.flat_map(|(mod_id, exposed_symbols)| {
let mod_name = mod_id.to_ident_str(interns).to_string();
// Completion for modules themselves
if mod_name.starts_with(&prefix) {
let item = CompletionItem {
label: mod_name.clone(),
kind: Some(CompletionItemKind::MODULE),
documentation: Some(formatting::module_documentation(
formatting::DescriptionsType::Exposes,
mod_id,
interns,
exposed_symbols,
modules_info.docs.get(mod_id),
modules_info,
)),
..Default::default()
};
vec![item]
// Complete dot completions for module exports
} else if prefix.starts_with(&(mod_name + ".")) {
get_module_exposed_completion(
exposed_symbols,
modules_info,
mod_id,
modules_info.docs.get(mod_id),
interns,
)
} else {
vec![]
}
});
if just_modules {
return module_completions.collect();
}
module_completions.collect()
}
fn get_module_exposed_completion(
exposed_symbols: &[(Symbol, Variable)],
modules_info: &ModulesInfo,
mod_id: &ModuleId,
docs: Option<&ModuleDocumentation>,
interns: &Interns,
) -> Vec<CompletionItem> {
let mut completion_docs = docs.map_or(Default::default(), |docs| {
get_completion_docs(exposed_symbols, docs)
});
exposed_symbols
.iter()
.map(|(symbol, var)| {
// We need to fetch the subs for the module that is exposing what we
// are trying to complete because that will have the type info we need.
modules_info
.with_subs(mod_id, |subs| {
make_completion_item(
subs,
mod_id,
interns,
completion_docs.remove(symbol),
symbol.as_str(interns).to_string(),
*var,
)
})
.expect("Couldn't find subs for module during completion.")
})
.collect::<Vec<_>>()
}
/// Efficiently walks the list of docs collecting the docs for completions as we go.
/// Should be faster than re-walking for every completion.
fn get_completion_docs(
completions: &[(Symbol, Variable)],
docs: &ModuleDocumentation,
) -> HashMap<Symbol, String> {
let mut symbols = completions
.iter()
.map(|(symbol, _)| symbol)
.collect::<Vec<_>>();
docs.entries
.iter()
.filter_map(|doc| match doc {
roc_load::docs::DocEntry::DocDef(DocDef { docs, symbol, .. }) => {
let docs_str = docs.as_ref().map(|str| str.trim().to_string())?;
let (index, _symbol) = symbols
.iter()
.enumerate()
.find(|(_index, symb)| symb == &&symbol)?;
symbols.swap_remove(index);
Some((*symbol, docs_str))
}
_ => None,
})
.collect()
}
/// Provides a list of completions for Type aliases within the scope.
/// TODO: Use this when we know we are within a type definition.
fn _alias_completions(
aliases: &MutMap<Symbol, (bool, Alias)>,
module_id: &ModuleId,
interns: &Interns,
) -> Vec<CompletionItem> {
aliases
.iter()
.filter(|(symbol, (_exposed, _alias))| &symbol.module_id() == module_id)
.map(|(symbol, (_exposed, _alias))| {
let name = symbol.as_str(interns).to_string();
CompletionItem {
label: name.clone(),
detail: Some(name + "we don't know how to print types "),
kind: Some(CompletionItemKind::CLASS),
..Default::default()
}
})
.collect()
}
fn make_completion_items(
subs: &mut Subs,
module_id: &ModuleId,
interns: &Interns,
docs: Option<&ModuleDocumentation>,
completions: Vec<(Symbol, Variable)>,
) -> Vec<CompletionItem> {
let mut completion_docs = docs.map_or(Default::default(), |mod_docs| {
get_completion_docs(&completions, mod_docs)
});
completions
.into_iter()
.map(|(symbol, var)| {
make_completion_item(
subs,
module_id,
interns,
completion_docs.remove(&symbol),
symbol.as_str(interns).to_string(),
var,
)
})
.collect()
}
fn make_completion_items_string(
subs: &mut Subs,
module_id: &ModuleId,
interns: &Interns,
completions: Vec<(String, Variable)>,
) -> Vec<CompletionItem> {
completions
.into_iter()
.map(|(symbol, var)| make_completion_item(subs, module_id, interns, None, symbol, var))
.collect()
}
fn make_completion_item(
subs: &mut Subs,
module_id: &ModuleId,
interns: &Interns,
docs_opt: Option<String>,
symbol_str: String,
var: Variable,
) -> CompletionItem {
let type_str = format_var_type(var, subs, module_id, interns);
let typ = match subs.get(var).content {
roc_types::subs::Content::Structure(var) => match var {
roc_types::subs::FlatType::Apply(_, _) => CompletionItemKind::FUNCTION,
roc_types::subs::FlatType::Func(_, _, _) => CompletionItemKind::FUNCTION,
roc_types::subs::FlatType::EmptyTagUnion
| roc_types::subs::FlatType::TagUnion(_, _) => CompletionItemKind::ENUM,
_ => CompletionItemKind::VARIABLE,
},
other => {
debug!(
"No specific completionKind for variable type: {:?} defaulting to 'Variable'",
other
);
CompletionItemKind::VARIABLE
}
};
CompletionItem {
label: symbol_str,
detail: Some(type_str),
kind: Some(typ),
documentation: docs_opt.map(|docs| {
lsp_types::Documentation::MarkupContent(lsp_types::MarkupContent {
kind: lsp_types::MarkupKind::Markdown,
value: docs,
})
}),
..Default::default()
}
}
/// E.g. a.b.c.d->{variable_name:"a", field:"d", middle_fields:["b","c"]}
struct RecFieldCompletion {
/// name of variable that is a record
variable_name: String,
field: String,
middle_fields: Vec<String>,
}
/// Finds the types of and names of all the fields of a record.
/// `var` should be a `Variable` that you know is of type record or else it will return an empty list.
fn find_record_fields(var: Variable, subs: &mut Subs) -> Vec<(String, Variable)> {
let content = subs.get(var);
match content.content {
roc_types::subs::Content::Structure(typ) => match typ {
roc_types::subs::FlatType::Record(fields, ext) => {
let field_types = fields.unsorted_iterator(subs, ext);
match field_types {
Ok(field) => field
.map(|a| (a.0.clone().into(), a.1.into_inner()))
.collect::<Vec<_>>(),
Err(err) => {
warn!("Error getting record field types for completion: {:?}", err);
vec![]
}
}
}
roc_types::subs::FlatType::Tuple(elems, ext) => {
let elems = elems.unsorted_iterator(subs, ext);
match elems {
Ok(elem) => elem.map(|(num, var)| (num.to_string(), var)).collect(),
Err(err) => {
warn!("Error getting tuple elems for completion: {:?}", err);
vec![]
}
}
}
_ => {
warn!(
"Trying to get field completion for a type that is not a record: {:?}",
typ
);
vec![]
}
},
roc_types::subs::Content::Error => {
// This is caused by typechecking our partially typed variable name, which confuses
// the typechecker about the type of the parent variable.
// TODO: ideally we could recover using a previous typecheck result that isn't broken.
warn!("Variable type of record was of type 'error', cannot access field");
vec![]
}
_ => {
warn!(
"Variable before field was unsupported type: {:?}",
subs.dbg(var)
);
vec![]
}
}
}
/// Splits a completion prefix for a field into its components.
/// E.g. a.b.c.d->{variable_name:"a",middle_fields:["b","c"],field:"d"}
fn get_field_completion_parts(symbol_prefix: &str) -> Option<RecFieldCompletion> {
let mut parts = symbol_prefix.split('.').collect::<Vec<_>>();
let field = parts.pop().unwrap_or("").to_string();
// A prefix without a leading variable name has no field completions.
if parts.is_empty() {
return None;
}
let variable_name = parts.remove(0).to_string();
// Now that we have the head and tail removed this is all the intermediate fields.
let middle_fields = parts.into_iter().map(ToString::to_string).collect();
Some(RecFieldCompletion {
variable_name,
field,
middle_fields,
})
}
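The splitting logic above can be sketched standalone on plain strings, independent of the Roc completion types. Note that the guard returning `None` when the prefix contains no `.` is an added hardening assumption for this sketch, not taken verbatim from the original:

```rust
// Standalone sketch of splitting a field-completion prefix:
// the last segment is the (possibly partial) field being typed, the first
// segment names the record variable, and everything between is middle fields.
fn split_completion_prefix(prefix: &str) -> Option<(String, Vec<String>, String)> {
    let mut parts: Vec<&str> = prefix.split('.').collect();
    let field = parts.pop().unwrap_or("").to_string();
    if parts.is_empty() {
        // No '.' means there is no record variable to complete from.
        return None;
    }
    let variable_name = parts.remove(0).to_string();
    let middle_fields: Vec<String> = parts.into_iter().map(|s| s.to_string()).collect();
    Some((variable_name, middle_fields, field))
}

fn main() {
    assert_eq!(
        split_completion_prefix("a.b.c.d"),
        Some(("a".to_string(), vec!["b".to_string(), "c".to_string()], "d".to_string()))
    );
    assert_eq!(
        split_completion_prefix("rec.fi"),
        Some(("rec".to_string(), vec![], "fi".to_string()))
    );
    assert_eq!(split_completion_prefix("lonely"), None);
    println!("ok");
}
```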
pub fn field_completion(
position: Position,
symbol_prefix: String,
declarations: &Declarations,
interns: &Interns,
subs: &mut Subs,
module_id: &ModuleId,
) -> Option<Vec<CompletionItem>> {
let RecFieldCompletion {
variable_name,
field,
middle_fields,
} = get_field_completion_parts(&symbol_prefix)?;
debug!(
"Getting record field completions: variable: {:?} field: {:?} middle: {:?} ",
variable_name, field, middle_fields
);
    // We get completions here, but all we really want is the info about the variable
    // that forms the first part of our record completion.
    // We are completing the full name of the variable, so we should only have one match.
let completion = get_completions(position, declarations, variable_name, interns)
.into_iter()
.map(|(symbol, var)| (symbol.as_str(interns).to_string(), var))
.next()?;
    // If the type has nested records we could have a completion prefix like "var.field1.field2.fi".
    // If the document isn't fully typechecked we won't know the type of field2, so we can't offer
    // completions based on its fields. Instead we get the type of "var", then the type of "field1"
    // within var's type, then "field2" within field1's type, and so on, until we have the type of
    // the record we are actually looking for field completions on.
let completion_record = middle_fields.iter().fold(completion, |state, chain_field| {
let fields_vars = find_record_fields(state.1, subs);
fields_vars
.into_iter()
.find(|type_field| chain_field == &type_field.0)
.unwrap_or(state)
});
let field_completions: Vec<_> = find_record_fields(completion_record.1, subs)
.into_iter()
        .filter(|(name, _)| name.starts_with(&field))
.collect();
let field_completions =
make_completion_items_string(subs, module_id, interns, field_completions);
Some(field_completions)
}
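The fold over `middle_fields` above walks one nested record type per path segment. A standalone sketch with a toy type model (the hypothetical `Ty` enum below is an illustration, not Roc's `Subs`):

```rust
use std::collections::HashMap;

// Toy stand-in for a record type: field name -> field type.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Record(HashMap<String, Ty>),
    Str,
}

// Walk the chain of middle fields, mirroring the fold in field_completion:
// at each step look up the next field's type; fall back to the current type
// when the field is unknown (as the `.unwrap_or(state)` above does).
fn resolve_chain(start: Ty, middle_fields: &[&str]) -> Ty {
    middle_fields.iter().fold(start, |ty, field| match &ty {
        Ty::Record(fields) => fields.get(*field).cloned().unwrap_or(ty.clone()),
        _ => ty,
    })
}

fn main() {
    let inner = HashMap::from([("name".to_string(), Ty::Str)]);
    let outer = HashMap::from([("person".to_string(), Ty::Record(inner))]);
    let resolved = resolve_chain(Ty::Record(outer), &["person"]);
    assert!(matches!(&resolved, Ty::Record(fields) if fields.contains_key("name")));
    // An unknown field falls back to the current type rather than failing.
    assert_eq!(resolve_chain(Ty::Str, &["missing"]), Ty::Str);
    println!("ok");
}
```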


@ -0,0 +1,66 @@
use roc_load::docs::ModuleDocumentation;
use roc_module::symbol::{Interns, ModuleId, Symbol};
use roc_types::subs::Variable;
use tower_lsp::lsp_types::{Documentation, MarkupContent, MarkupKind};
use crate::analysis::{utils::format_var_type, ModulesInfo};
fn get_module_exposed_list(
module_id: &ModuleId,
interns: &Interns,
modules_info: &ModulesInfo,
exposed: &[(Symbol, Variable)],
) -> Option<std::string::String> {
modules_info.with_subs(module_id, |subs| {
let items = exposed
.iter()
.map(|(symbol, var)| {
let var_str = format_var_type(*var, subs, module_id, interns);
format!("{0}: {1}", symbol.as_str(interns), var_str)
})
.collect::<Vec<_>>();
        items.join("\n")
})
}
pub(super) enum DescriptionsType {
Exposes,
}
fn md_doc(val: String) -> Documentation {
Documentation::MarkupContent(MarkupContent {
kind: MarkupKind::Markdown,
value: val,
})
}
/// Generates a nicely formatted block of text for the `CompletionItem` documentation field.
pub(super) fn module_documentation(
description_type: DescriptionsType,
module_id: &ModuleId,
interns: &Interns,
exposed: &[(Symbol, Variable)],
module_docs: Option<&ModuleDocumentation>,
modules_info: &ModulesInfo,
) -> Documentation {
let exposed_string =
get_module_exposed_list(module_id, interns, modules_info, exposed).unwrap_or_default();
let module_doc = module_docs
.and_then(|docs| {
docs.entries.first().and_then(|first_doc| match first_doc {
            roc_load::docs::DocEntry::ModuleDoc(str) => Some(str.trim().to_string()),
_ => None,
})
})
.unwrap_or_default();
match description_type {
DescriptionsType::Exposes => md_doc(format!(
"{0}```roc\n{1}\n```",
module_doc + "\n",
exposed_string
)),
}
}


@ -0,0 +1,254 @@
use log::trace;
use roc_can::{
def::Def,
expr::{ClosureData, Expr, WhenBranch},
pattern::{ListPatterns, Pattern, RecordDestruct, TupleDestruct},
traverse::{walk_decl, walk_def, walk_expr, DeclarationInfo, Visitor},
};
use roc_module::symbol::{Interns, Symbol};
use roc_region::all::{Loc, Position, Region};
use roc_types::subs::Variable;
pub(crate) struct CompletionVisitor<'a> {
pub(crate) position: Position,
pub(crate) found_declarations: Vec<(Symbol, Variable)>,
pub(crate) interns: &'a Interns,
pub(crate) prefix: String,
}
impl Visitor for CompletionVisitor<'_> {
fn should_visit(&mut self, region: Region) -> bool {
region.contains_pos(self.position)
}
fn visit_expr(&mut self, expr: &Expr, region: Region, var: Variable) {
if region.contains_pos(self.position) {
let mut res = self.expression_defs(expr);
self.found_declarations.append(&mut res);
walk_expr(self, expr, var);
}
}
fn visit_decl(&mut self, decl: DeclarationInfo<'_>) {
match decl {
DeclarationInfo::Value { loc_expr, .. }
| DeclarationInfo::Function {
loc_body: loc_expr, ..
}
| DeclarationInfo::Destructure { loc_expr, .. } => {
let res = self.decl_to_completion_item(&decl);
self.found_declarations.extend(res);
if loc_expr.region.contains_pos(self.position) {
walk_decl(self, decl);
};
}
_ => {
walk_decl(self, decl);
}
}
}
fn visit_def(&mut self, def: &Def) {
let sym_var_vec = self.extract_defs(def);
self.found_declarations.extend(sym_var_vec);
walk_def(self, def);
}
}
impl CompletionVisitor<'_> {
fn extract_defs(&mut self, def: &Def) -> Vec<(Symbol, Variable)> {
        trace!("Extracting pattern vars from def for completion");
def.pattern_vars
.iter()
.map(|(symbol, var)| (*symbol, *var))
.collect()
}
fn expression_defs(&self, expr: &Expr) -> Vec<(Symbol, Variable)> {
match expr {
Expr::When {
expr_var, branches, ..
} => self.when_is_expr(branches, expr_var),
Expr::Closure(ClosureData {
arguments,
loc_body,
..
}) => {
                // If we are inside the closure, complete its argument variables.
if loc_body.region.contains_pos(self.position) {
arguments
.iter()
.flat_map(|(var, _, pat)| self.patterns(&pat.value, var))
.collect()
} else {
vec![]
}
}
_ => vec![],
}
}
    /// Extract any variables made available by the branch of a `when` expression that contains `self.position`.
fn when_is_expr(
&self,
branches: &[WhenBranch],
expr_var: &Variable,
) -> Vec<(Symbol, Variable)> {
branches
.iter()
.flat_map(
|WhenBranch {
patterns, value, ..
}| {
if value.region.contains_pos(self.position) {
patterns
.iter()
.flat_map(|pattern| self.patterns(&pattern.pattern.value, expr_var))
.collect()
} else {
vec![]
}
},
)
.collect()
}
fn record_destructure(&self, destructs: &[Loc<RecordDestruct>]) -> Vec<(Symbol, Variable)> {
destructs
.iter()
.flat_map(|loc| match &loc.value.typ {
roc_can::pattern::DestructType::Required
| roc_can::pattern::DestructType::Optional(_, _) => {
vec![(loc.value.symbol, loc.value.var)]
}
roc_can::pattern::DestructType::Guard(var, pat) => self.patterns(&pat.value, var),
})
.collect()
}
fn tuple_destructure(&self, destructs: &[Loc<TupleDestruct>]) -> Vec<(Symbol, Variable)> {
destructs
.iter()
.flat_map(|loc| {
let (var, pattern) = &loc.value.typ;
self.patterns(&pattern.value, var)
})
.collect()
}
fn list_pattern(&self, list_elems: &ListPatterns, var: &Variable) -> Vec<(Symbol, Variable)> {
list_elems
.patterns
.iter()
.flat_map(|loc| self.patterns(&loc.value, var))
.collect()
}
fn tag_pattern(&self, arguments: &[(Variable, Loc<Pattern>)]) -> Vec<(Symbol, Variable)> {
arguments
.iter()
.flat_map(|(var, pat)| self.patterns(&pat.value, var))
.collect()
}
fn as_pattern(
&self,
as_pat: &Pattern,
as_symbol: Symbol,
var: &Variable,
) -> Vec<(Symbol, Variable)> {
// get the variables introduced within the pattern
let mut patterns = self.patterns(as_pat, var);
// add the "as" that wraps the whole pattern
patterns.push((as_symbol, *var));
patterns
}
/// Returns a list of symbols defined by this pattern.
/// `pattern_var`: Variable type of the entire pattern. This will be returned if
/// the pattern turns out to be an identifier.
fn patterns(
&self,
pattern: &roc_can::pattern::Pattern,
pattern_var: &Variable,
) -> Vec<(Symbol, Variable)> {
match pattern {
roc_can::pattern::Pattern::Identifier(symbol) => {
if self.is_match(symbol) {
vec![(*symbol, *pattern_var)]
} else {
vec![]
}
}
Pattern::AppliedTag { arguments, .. } => self.tag_pattern(arguments),
Pattern::UnwrappedOpaque { argument, .. } => {
self.patterns(&argument.1.value, &argument.0)
}
Pattern::List {
elem_var, patterns, ..
} => self.list_pattern(patterns, elem_var),
roc_can::pattern::Pattern::As(pat, symbol) => {
self.as_pattern(&pat.value, *symbol, pattern_var)
}
roc_can::pattern::Pattern::RecordDestructure { destructs, .. } => {
self.record_destructure(destructs)
}
roc_can::pattern::Pattern::TupleDestructure { destructs, .. } => {
self.tuple_destructure(destructs)
}
_ => vec![],
}
}
fn is_match(&self, symbol: &Symbol) -> bool {
symbol.as_str(self.interns).starts_with(&self.prefix)
}
fn decl_to_completion_item(&self, decl: &DeclarationInfo) -> Vec<(Symbol, Variable)> {
match decl {
DeclarationInfo::Value {
expr_var, pattern, ..
} => self.patterns(pattern, expr_var),
DeclarationInfo::Function {
expr_var,
pattern,
function,
loc_body,
..
} => {
let mut sym_var_vec = vec![];
// append the function declaration itself for recursive calls
sym_var_vec.extend(self.patterns(pattern, expr_var));
if loc_body.region.contains_pos(self.position) {
// also add the arguments if we are inside the function
let args = function
.value
.arguments
.iter()
.flat_map(|(var, _, pat)| self.patterns(&pat.value, var));
                    // add the argument symbols to the completion output
                    sym_var_vec.extend(args);
trace!(
"Added function args to completion output =:{:#?}",
sym_var_vec
);
}
sym_var_vec
}
DeclarationInfo::Destructure {
loc_pattern,
expr_var,
..
} => self.patterns(&loc_pattern.value, expr_var),
DeclarationInfo::Expectation { .. } => vec![],
}
}
}


@ -0,0 +1,59 @@
use bumpalo::Bump;
use roc_fmt::Buf;
use roc_parse::{
ast::{Defs, Module},
parser::SyntaxError,
};
use roc_region::all::Loc;
use self::format::FormattedAst;
use super::tokens::{IterTokens, Token};
mod format;
pub struct Ast<'a> {
arena: &'a Bump,
module: Module<'a>,
defs: Defs<'a>,
}
impl<'a> Ast<'a> {
pub fn parse(arena: &'a Bump, src: &'a str) -> Result<Ast<'a>, SyntaxError<'a>> {
use roc_parse::{
module::{module_defs, parse_header},
parser::Parser,
state::State,
};
let (module, state) = parse_header(arena, State::new(src.as_bytes()))
.map_err(|e| SyntaxError::Header(e.problem))?;
let (_, defs, _) = module_defs().parse(arena, state, 0).map_err(|(_, e)| e)?;
Ok(Ast {
module,
defs,
arena,
})
}
pub fn fmt(&self) -> FormattedAst<'a> {
let mut buf = Buf::new_in(self.arena);
roc_fmt::module::fmt_module(&mut buf, &self.module);
roc_fmt::def::fmt_defs(&mut buf, &self.defs, 0);
buf.fmt_end_of_file();
FormattedAst::new(buf)
}
pub fn semantic_tokens(&self) -> impl IntoIterator<Item = Loc<Token>> + '_ {
let header_tokens = self.module.iter_tokens(self.arena);
let body_tokens = self.defs.iter_tokens(self.arena);
header_tokens.into_iter().chain(body_tokens)
}
}


@ -0,0 +1,21 @@
use roc_fmt::Buf;
pub struct FormattedAst<'a> {
buf: Buf<'a>,
}
impl<'a> FormattedAst<'a> {
pub(crate) fn new(buf: Buf<'a>) -> Self {
Self { buf }
}
pub fn as_str(&self) -> &str {
self.buf.as_str()
}
}
// Implementing Display (rather than ToString directly) is the idiomatic choice;
// to_string() is still available via the blanket impl.
impl std::fmt::Display for FormattedAst<'_> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(self.buf.as_str())
    }
}


@ -0,0 +1,49 @@
use roc_region::all::{LineColumn, LineInfo, Loc};
use tower_lsp::lsp_types::SemanticToken;
use super::tokens::Token;
/// Encodes semantic tokens as described in the LSP specification.
/// See [the sample documentation](https://github.com/microsoft/vscode-extension-samples/blob/5ae1f7787122812dcc84e37427ca90af5ee09f14/semantic-tokens-sample/vscode.proposed.d.ts#L71-L128).
pub fn arrange_semantic_tokens(
tokens: impl IntoIterator<Item = Loc<Token>>,
line_info: &LineInfo,
) -> Vec<SemanticToken> {
let tokens = tokens.into_iter();
let (min, max) = tokens.size_hint();
let size_hint = max.unwrap_or(min);
let mut result = Vec::with_capacity(size_hint);
let mut last_line = 0;
let mut last_start = 0;
for Loc {
region,
value: token,
} in tokens
{
let length = region.len();
let LineColumn { line, column } = line_info.convert_pos(region.start());
let delta_line = line - last_line;
let delta_start = if delta_line == 0 {
column - last_start
} else {
column
};
result.push(SemanticToken {
delta_line,
delta_start,
length,
token_type: token as u32,
token_modifiers_bitset: 0,
});
last_line = line;
last_start = column;
}
result
}
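The delta encoding implemented above can be illustrated on plain `(line, column, length)` triples (a standalone sketch; the real function also carries token types and works on `Loc<Token>` and `LineInfo`, and like it, this assumes tokens arrive in source order):

```rust
// Standalone sketch of the LSP semantic-token delta encoding used above:
// each token records its line delta from the previous token, and its start
// column delta from the previous token when both sit on the same line
// (otherwise the absolute column on the new line).
fn encode_deltas(tokens: &[(u32, u32, u32)]) -> Vec<(u32, u32, u32)> {
    let mut last_line = 0;
    let mut last_start = 0;
    let mut result = Vec::with_capacity(tokens.len());
    for &(line, column, length) in tokens {
        let delta_line = line - last_line;
        let delta_start = if delta_line == 0 { column - last_start } else { column };
        result.push((delta_line, delta_start, length));
        last_line = line;
        last_start = column;
    }
    result
}

fn main() {
    // Tokens at (line 2, col 5), (line 2, col 10), (line 5, col 0):
    let encoded = encode_deltas(&[(2, 5, 3), (2, 10, 2), (5, 0, 7)]);
    assert_eq!(encoded, vec![(2, 5, 3), (0, 5, 2), (3, 0, 7)]);
    println!("ok");
}
```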


@ -0,0 +1,797 @@
use bumpalo::{
collections::{CollectIn, Vec as BumpVec},
vec as bumpvec, Bump,
};
use roc_module::called_via::{BinOp, UnaryOp};
use roc_parse::{
ast::{
AbilityImpls, AbilityMember, AssignedField, Collection, Defs, Expr, Header, Implements,
ImplementsAbilities, ImplementsAbility, ImplementsClause, Module, Pattern, PatternAs,
RecordBuilderField, Spaced, StrLiteral, Tag, TypeAnnotation, TypeDef, TypeHeader, ValueDef,
WhenBranch,
},
header::{
AppHeader, ExposedName, HostedHeader, ImportsEntry, InterfaceHeader, ModuleName,
PackageEntry, PackageHeader, PackageName, PlatformHeader, PlatformRequires, ProvidesTo, To,
TypedIdent,
},
ident::{Accessor, UppercaseIdent},
};
use roc_region::all::{Loc, Region};
use tower_lsp::lsp_types::SemanticTokenType;
macro_rules! tokens {
($($(#[$meta:meta])* $token:ident => $lsp_token:literal),* $(,)?) => {
pub enum Token {
$(
$(#[$meta])*
$token
),*
}
        // Using each LSP token string as a match arm makes a duplicate string a
        // compile-time "unreachable pattern" warning, keeping the legend redundancy-free.
        fn _non_redundant_lsp_tokens() {
match "" {
$($lsp_token => (),)*
_ => (),
}
}
impl Token {
pub const LEGEND: &'static [SemanticTokenType] = &[
$(SemanticTokenType::new($lsp_token)),*
];
}
}
}
// Try to use predefined values at
// https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_semanticTokens
tokens! {
Module => "namespace",
Type => "type",
Ability => "interface",
#[allow(unused)]
TypeVariable => "typeParameter",
#[allow(unused)]
Parameter => "parameter",
Variable => "variable",
Field => "property",
Tag => "enumMember",
Function => "function",
Keyword => "keyword",
String => "string",
Number => "number",
Operator => "operator",
Comment => "comment",
}
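The `_non_redundant_lsp_tokens` check generated by the macro relies on the compiler flagging duplicate match arms. A minimal standalone illustration (hypothetical legend strings, not the full set above):

```rust
// Matching on each legend string makes any duplicate literal an
// "unreachable pattern" warning at compile time, which is exactly the
// redundancy check the tokens! macro generates.
fn in_legend(s: &str) -> bool {
    match s {
        "namespace" => true,
        "type" => true,
        // Adding "type" again here would trigger an unreachable-pattern warning.
        _ => false,
    }
}

fn main() {
    assert!(in_legend("namespace"));
    assert!(!in_legend("property"));
    println!("ok");
}
```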
fn onetoken(token: Token, region: Region, arena: &Bump) -> BumpVec<Loc<Token>> {
bumpvec![in arena; Loc::at(region, token)]
}
fn field_token(region: Region, arena: &Bump) -> BumpVec<Loc<Token>> {
onetoken(Token::Field, region, arena)
}
trait HasToken {
fn token(&self) -> Token;
}
impl<T: HasToken> HasToken for Spaced<'_, T> {
fn token(&self) -> Token {
self.item().token()
}
}
impl HasToken for ModuleName<'_> {
fn token(&self) -> Token {
Token::Module
}
}
impl HasToken for &str {
fn token(&self) -> Token {
if self.chars().next().unwrap().is_uppercase() {
Token::Type
} else {
Token::Variable
}
}
}
impl HasToken for ExposedName<'_> {
fn token(&self) -> Token {
self.as_str().token()
}
}
impl HasToken for PackageName<'_> {
fn token(&self) -> Token {
Token::Module
}
}
impl HasToken for StrLiteral<'_> {
fn token(&self) -> Token {
Token::String
}
}
impl HasToken for UppercaseIdent<'_> {
fn token(&self) -> Token {
Token::Type
}
}
impl HasToken for To<'_> {
fn token(&self) -> Token {
match self {
To::ExistingPackage(_) => Token::Module,
To::NewPackage(_) => Token::Module,
}
}
}
impl HasToken for BinOp {
fn token(&self) -> Token {
Token::Operator
}
}
impl HasToken for UnaryOp {
fn token(&self) -> Token {
Token::Operator
}
}
pub trait IterTokens {
// Use a vec until "impl trait in trait" is stabilized
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>>;
}
impl<T: HasToken> IterTokens for Loc<T> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
onetoken(self.value.token(), self.region, arena)
}
}
impl<T: IterTokens> IterTokens for Spaced<'_, T> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.item().iter_tokens(arena)
}
}
impl<T: IterTokens> IterTokens for Collection<'_, T> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.items
.iter()
.flat_map(|item| item.iter_tokens(arena))
.collect_in(arena)
}
}
impl<T: IterTokens> IterTokens for &[T] {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.iter()
.flat_map(|item| item.iter_tokens(arena))
.collect_in(arena)
}
}
impl<T: IterTokens, U: IterTokens> IterTokens for (T, U) {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let (a, b) = self;
a.iter_tokens(arena)
.into_iter()
.chain(b.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for Module<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
comments: _,
header,
} = self;
header.iter_tokens(arena)
}
}
impl IterTokens for Header<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
Header::Interface(ih) => ih.iter_tokens(arena),
Header::App(app) => app.iter_tokens(arena),
Header::Package(pkg) => pkg.iter_tokens(arena),
Header::Platform(pf) => pf.iter_tokens(arena),
Header::Hosted(h) => h.iter_tokens(arena),
}
}
}
impl IterTokens for InterfaceHeader<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
before_name: _,
name,
exposes,
imports,
} = self;
(name.iter_tokens(arena).into_iter())
.chain(exposes.item.iter_tokens(arena))
.chain(imports.item.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for AppHeader<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
before_name: _,
name,
packages,
imports,
provides,
} = self;
(name.iter_tokens(arena).into_iter())
.chain(packages.iter().flat_map(|p| p.item.iter_tokens(arena)))
.chain(imports.iter().flat_map(|i| i.item.iter_tokens(arena)))
.chain(provides.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for PackageHeader<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
before_name: _,
name,
exposes,
packages,
} = self;
(name.iter_tokens(arena).into_iter())
.chain(exposes.item.iter_tokens(arena))
.chain(packages.item.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for PlatformHeader<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
before_name: _,
name,
requires,
exposes,
packages,
imports,
provides,
} = self;
(name.iter_tokens(arena).into_iter())
.chain(requires.item.iter_tokens(arena))
.chain(exposes.item.iter_tokens(arena))
.chain(packages.item.iter_tokens(arena))
.chain(imports.item.iter_tokens(arena))
.chain(provides.item.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for HostedHeader<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
before_name: _,
name,
exposes,
imports,
generates: _,
generates_with,
} = self;
(name.iter_tokens(arena).into_iter())
.chain(exposes.item.iter_tokens(arena))
.chain(imports.item.iter_tokens(arena))
.chain(generates_with.item.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for Loc<Spaced<'_, ImportsEntry<'_>>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self.value.item() {
ImportsEntry::Module(_module_name, names) => names.iter_tokens(arena),
ImportsEntry::Package(_pkg, _module_name, names) => names.iter_tokens(arena),
ImportsEntry::IngestedFile(_str, idents) => idents.iter_tokens(arena),
}
}
}
impl IterTokens for Loc<Spaced<'_, PackageEntry<'_>>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let PackageEntry {
shorthand: _,
spaces_after_shorthand: _,
package_name,
} = self.value.item();
package_name.iter_tokens(arena)
}
}
impl IterTokens for Loc<Spaced<'_, TypedIdent<'_>>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.item().iter_tokens(arena)
}
}
impl IterTokens for TypedIdent<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
ident,
spaces_before_colon: _,
ann,
} = self;
(ident.iter_tokens(arena).into_iter())
.chain(ann.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for ProvidesTo<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
provides_keyword: _,
entries,
types,
to_keyword: _,
to,
} = self;
(entries.iter_tokens(arena).into_iter())
.chain(types.iter().flat_map(|t| t.iter_tokens(arena)))
.chain(to.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for PlatformRequires<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self { rigids, signature } = self;
(rigids.iter_tokens(arena).into_iter())
.chain(signature.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for Loc<TypeAnnotation<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self.value {
TypeAnnotation::Function(params, ret) => (params.iter_tokens(arena).into_iter())
.chain(ret.iter_tokens(arena))
.collect_in(arena),
TypeAnnotation::Apply(_mod, _type, args) => args.iter_tokens(arena),
TypeAnnotation::BoundVariable(_) => onetoken(Token::Type, self.region, arena),
TypeAnnotation::As(ty, _, as_ty) => (ty.iter_tokens(arena).into_iter())
.chain(as_ty.iter_tokens(arena))
.collect_in(arena),
TypeAnnotation::Record { fields, ext } => (fields.iter_tokens(arena).into_iter())
.chain(ext.iter().flat_map(|t| t.iter_tokens(arena)))
.collect_in(arena),
TypeAnnotation::Tuple { elems, ext } => (elems.iter_tokens(arena).into_iter())
.chain(ext.iter().flat_map(|t| t.iter_tokens(arena)))
.collect_in(arena),
TypeAnnotation::TagUnion { tags, ext } => (tags.iter_tokens(arena).into_iter())
.chain(ext.iter().flat_map(|t| t.iter_tokens(arena)))
.collect_in(arena),
TypeAnnotation::Inferred => onetoken(Token::Type, self.region, arena),
TypeAnnotation::Wildcard => onetoken(Token::Type, self.region, arena),
TypeAnnotation::Where(ty, implements) => (ty.iter_tokens(arena).into_iter())
.chain(implements.iter_tokens(arena))
.collect_in(arena),
TypeAnnotation::SpaceBefore(ty, _) | TypeAnnotation::SpaceAfter(ty, _) => {
Loc::at(self.region, *ty).iter_tokens(arena)
}
TypeAnnotation::Malformed(_) => bumpvec![in arena;],
}
}
}
impl<T> IterTokens for Loc<AssignedField<'_, T>>
where
Loc<T>: IterTokens,
{
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.iter_tokens(arena)
}
}
impl<T> IterTokens for AssignedField<'_, T>
where
Loc<T>: IterTokens,
{
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
AssignedField::RequiredValue(field, _, ty)
| AssignedField::OptionalValue(field, _, ty) => (field_token(field.region, arena)
.into_iter())
.chain(ty.iter_tokens(arena))
.collect_in(arena),
AssignedField::LabelOnly(s) => s.iter_tokens(arena),
AssignedField::SpaceBefore(af, _) | AssignedField::SpaceAfter(af, _) => {
af.iter_tokens(arena)
}
AssignedField::Malformed(_) => bumpvec![in arena;],
}
}
}
impl IterTokens for Loc<Tag<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.iter_tokens(arena)
}
}
impl IterTokens for Tag<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
Tag::Apply { name, args } => (onetoken(Token::Tag, name.region, arena).into_iter())
.chain(args.iter_tokens(arena))
.collect_in(arena),
Tag::SpaceBefore(t, _) | Tag::SpaceAfter(t, _) => t.iter_tokens(arena),
Tag::Malformed(_) => bumpvec![in arena;],
}
}
}
impl IterTokens for TypeHeader<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self { name, vars } = self;
(name.iter_tokens(arena).into_iter())
.chain(vars.iter().map(|v| v.with_value(Token::Type)))
.collect_in(arena)
}
}
impl IterTokens for Loc<ImplementsClause<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.iter_tokens(arena)
}
}
impl IterTokens for ImplementsClause<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self { var, abilities } = self;
(var.iter_tokens(arena).into_iter())
.chain(abilities.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for Defs<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.defs()
.flat_map(|item| match item {
Ok(type_def) => type_def.iter_tokens(arena),
Err(value_def) => value_def.iter_tokens(arena),
})
.collect_in(arena)
}
}
impl IterTokens for TypeDef<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
TypeDef::Alias { header, ann } => (header.iter_tokens(arena).into_iter())
.chain(ann.iter_tokens(arena))
.collect_in(arena),
TypeDef::Opaque {
header,
typ,
derived,
} => (header.iter_tokens(arena).into_iter())
.chain(typ.iter_tokens(arena))
.chain(derived.iter().flat_map(|t| t.iter_tokens(arena)))
.collect_in(arena),
TypeDef::Ability {
header: TypeHeader { name, vars },
loc_implements,
members,
} => (onetoken(Token::Ability, name.region, arena).into_iter())
.chain(vars.iter().map(|v| v.with_value(Token::Type)))
.chain(loc_implements.iter_tokens(arena))
.chain(members.iter_tokens(arena))
.collect_in(arena),
}
}
}
impl IterTokens for Loc<ImplementsAbilities<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.iter_tokens(arena)
}
}
impl IterTokens for ImplementsAbilities<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
ImplementsAbilities::Implements(impls) => impls.iter_tokens(arena),
ImplementsAbilities::SpaceBefore(i, _) | ImplementsAbilities::SpaceAfter(i, _) => {
i.iter_tokens(arena)
}
}
}
}
impl IterTokens for Loc<ImplementsAbility<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.iter_tokens(arena)
}
}
impl IterTokens for ImplementsAbility<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
ImplementsAbility::ImplementsAbility { ability, impls } => {
(ability.iter_tokens(arena).into_iter())
.chain(impls.iter().flat_map(|i| i.iter_tokens(arena)))
.collect_in(arena)
}
ImplementsAbility::SpaceBefore(ia, _) | ImplementsAbility::SpaceAfter(ia, _) => {
ia.iter_tokens(arena)
}
}
}
}
impl IterTokens for Loc<AbilityImpls<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
self.value.iter_tokens(arena)
}
}
impl IterTokens for AbilityImpls<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
AbilityImpls::AbilityImpls(fields) => fields.iter_tokens(arena),
AbilityImpls::SpaceBefore(ai, _) | AbilityImpls::SpaceAfter(ai, _) => {
ai.iter_tokens(arena)
}
}
}
}
impl IterTokens for Loc<Implements<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self.value {
Implements::Implements => onetoken(Token::Keyword, self.region, arena),
Implements::SpaceBefore(i, _) | Implements::SpaceAfter(i, _) => {
Loc::at(self.region, *i).iter_tokens(arena)
}
}
}
}
impl IterTokens for AbilityMember<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self { name, typ } = self;
(onetoken(Token::Function, name.region, arena).into_iter())
.chain(typ.iter_tokens(arena))
.collect_in(arena)
}
}
impl IterTokens for ValueDef<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self {
ValueDef::Annotation(pattern, annotation) => (pattern.iter_tokens(arena).into_iter())
.chain(annotation.iter_tokens(arena))
.collect_in(arena),
ValueDef::Body(pattern, body) => (pattern.iter_tokens(arena).into_iter())
.chain(body.iter_tokens(arena))
.collect_in(arena),
ValueDef::AnnotatedBody {
ann_pattern,
ann_type,
comment: _,
body_pattern,
body_expr,
} => (ann_pattern.iter_tokens(arena).into_iter())
.chain(ann_type.iter_tokens(arena))
.chain(body_pattern.iter_tokens(arena))
.chain(body_expr.iter_tokens(arena))
.collect_in(arena),
ValueDef::Dbg {
preceding_comment,
condition,
}
| ValueDef::Expect {
preceding_comment,
condition,
}
| ValueDef::ExpectFx {
preceding_comment,
condition,
} => (onetoken(Token::Comment, *preceding_comment, arena).into_iter())
.chain(condition.iter_tokens(arena))
.collect_in(arena),
ValueDef::Stmt(loc_expr) => loc_expr.iter_tokens(arena),
}
}
}
impl IterTokens for &Loc<Expr<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
(**self).iter_tokens(arena)
}
}
impl IterTokens for Loc<Expr<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let region = self.region;
match self.value {
Expr::Float(_) => onetoken(Token::Number, region, arena),
Expr::Num(_) => onetoken(Token::Number, region, arena),
Expr::NonBase10Int { .. } => onetoken(Token::Number, region, arena),
Expr::Str(_) => onetoken(Token::String, region, arena),
Expr::SingleQuote(_) => onetoken(Token::String, region, arena),
Expr::RecordAccess(rcd, _field) => Loc::at(region, *rcd).iter_tokens(arena),
Expr::AccessorFunction(accessor) => Loc::at(region, accessor).iter_tokens(arena),
Expr::TupleAccess(tup, _field) => Loc::at(region, *tup).iter_tokens(arena),
Expr::List(lst) => lst.iter_tokens(arena),
Expr::RecordUpdate { update, fields } => (update.iter_tokens(arena).into_iter())
.chain(fields.iter().flat_map(|f| f.iter_tokens(arena)))
.collect_in(arena),
Expr::Record(rcd) => rcd.iter_tokens(arena),
Expr::Tuple(tup) => tup.iter_tokens(arena),
Expr::RecordBuilder(rb) => rb.iter_tokens(arena),
Expr::IngestedFile(_path, ty) => ty.iter_tokens(arena),
Expr::Var { .. } => onetoken(Token::Variable, region, arena),
Expr::Underscore(_) => onetoken(Token::Variable, region, arena),
Expr::Crash => onetoken(Token::Keyword, region, arena),
Expr::Tag(_) => onetoken(Token::Tag, region, arena),
Expr::OpaqueRef(_) => onetoken(Token::Type, region, arena),
Expr::Closure(patterns, body) => (patterns.iter_tokens(arena).into_iter())
.chain(body.iter_tokens(arena))
.collect_in(arena),
Expr::Defs(defs, exprs) => (defs.iter_tokens(arena).into_iter())
.chain(exprs.iter_tokens(arena))
.collect_in(arena),
Expr::Backpassing(patterns, e1, e2) => (patterns.iter_tokens(arena).into_iter())
.chain(e1.iter_tokens(arena))
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::Expect(e1, e2) => (e1.iter_tokens(arena).into_iter())
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::Dbg(e1, e2) => (e1.iter_tokens(arena).into_iter())
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::LowLevelDbg(_, e1, e2) => (e1.iter_tokens(arena).into_iter())
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::Apply(e1, e2, _called_via) => (e1.iter_tokens(arena).into_iter())
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::BinOps(e1, e2) => (e1.iter_tokens(arena).into_iter())
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::UnaryOp(e1, op) => (op.iter_tokens(arena).into_iter())
.chain(e1.iter_tokens(arena))
.collect_in(arena),
Expr::If(e1, e2) => (e1.iter_tokens(arena).into_iter())
.chain(e2.iter_tokens(arena))
.collect_in(arena),
Expr::When(e, branches) => (e.iter_tokens(arena).into_iter())
.chain(branches.iter_tokens(arena))
.collect_in(arena),
Expr::SpaceBefore(e, _) | Expr::SpaceAfter(e, _) => {
Loc::at(region, *e).iter_tokens(arena)
}
Expr::ParensAround(e) => Loc::at(region, *e).iter_tokens(arena),
Expr::MultipleRecordBuilders(e) => e.iter_tokens(arena),
Expr::UnappliedRecordBuilder(e) => e.iter_tokens(arena),
Expr::MalformedIdent(_, _)
| Expr::MalformedClosure
| Expr::PrecedenceConflict(_)
| Expr::EmptyDefsFinal
| Expr::MalformedSuffixed(_) => {
bumpvec![in arena;]
}
}
}
}
impl IterTokens for Loc<Accessor<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self.value {
Accessor::RecordField(_) => onetoken(Token::Function, self.region, arena),
Accessor::TupleIndex(_) => onetoken(Token::Function, self.region, arena),
}
}
}
impl IterTokens for Loc<RecordBuilderField<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
match self.value {
RecordBuilderField::Value(field, _, e)
| RecordBuilderField::ApplyValue(field, _, _, e) => field_token(field.region, arena)
.into_iter()
.chain(e.iter_tokens(arena))
.collect_in(arena),
RecordBuilderField::LabelOnly(field) => field_token(field.region, arena),
RecordBuilderField::SpaceBefore(rbf, _) | RecordBuilderField::SpaceAfter(rbf, _) => {
Loc::at(self.region, *rbf).iter_tokens(arena)
}
RecordBuilderField::Malformed(_) => bumpvec![in arena;],
}
}
}
impl IterTokens for &WhenBranch<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let WhenBranch {
patterns,
value,
guard,
} = self;
(patterns.iter_tokens(arena).into_iter())
.chain(value.iter_tokens(arena))
.chain(guard.iter().flat_map(|g| g.iter_tokens(arena)))
.collect_in(arena)
}
}
impl IterTokens for Loc<Pattern<'_>> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let region = self.region;
match self.value {
Pattern::Identifier { .. } => onetoken(Token::Variable, region, arena),
Pattern::Tag(_) => onetoken(Token::Tag, region, arena),
Pattern::OpaqueRef(_) => onetoken(Token::Type, region, arena),
Pattern::Apply(p1, p2) => (p1.iter_tokens(arena).into_iter())
.chain(p2.iter_tokens(arena))
.collect_in(arena),
Pattern::RecordDestructure(ps) => ps.iter_tokens(arena),
Pattern::RequiredField(_field, p) => p.iter_tokens(arena),
Pattern::OptionalField(_field, p) => p.iter_tokens(arena),
Pattern::NumLiteral(_) => onetoken(Token::Number, region, arena),
Pattern::NonBase10Literal { .. } => onetoken(Token::Number, region, arena),
Pattern::FloatLiteral(_) => onetoken(Token::Number, region, arena),
Pattern::StrLiteral(_) => onetoken(Token::String, region, arena),
Pattern::Underscore(_) => onetoken(Token::Variable, region, arena),
Pattern::SingleQuote(_) => onetoken(Token::String, region, arena),
Pattern::Tuple(ps) => ps.iter_tokens(arena),
Pattern::List(ps) => ps.iter_tokens(arena),
Pattern::ListRest(None) => bumpvec![in arena;],
Pattern::ListRest(Some((_, pas))) => pas.iter_tokens(arena),
Pattern::As(p1, pas) => (p1.iter_tokens(arena).into_iter())
.chain(pas.iter_tokens(arena))
.collect_in(arena),
Pattern::SpaceBefore(p, _) | Pattern::SpaceAfter(p, _) => {
Loc::at(region, *p).iter_tokens(arena)
}
Pattern::QualifiedIdentifier { .. } => onetoken(Token::Variable, region, arena),
Pattern::Malformed(_) | Pattern::MalformedIdent(_, _) => bumpvec![in arena;],
}
}
}
impl IterTokens for PatternAs<'_> {
fn iter_tokens<'a>(&self, arena: &'a Bump) -> BumpVec<'a, Loc<Token>> {
let Self {
spaces_before: _,
identifier,
} = self;
onetoken(Token::Variable, identifier.region, arena)
}
}
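The `IterTokens` impls above all follow one pattern: each AST node flattens itself into a list of semantic tokens by emitting a token for itself (via `onetoken`) or chaining together the tokens of its children into an arena-allocated vector. A minimal, self-contained sketch of that pattern, with invented types (plain `Vec` instead of a bumpalo arena, a toy `Expr` instead of Roc's AST):

```rust
// Token kinds, mirroring a small subset of the legend used above.
#[derive(Debug, PartialEq)]
enum Token {
    Variable,
    Number,
}

// A toy expression tree; `start`/`len` stand in for source regions.
enum Expr {
    Var { start: usize, len: usize },
    Num { start: usize, len: usize },
    Apply(Box<Expr>, Vec<Expr>),
}

// Flatten an expression into (kind, start, len) tokens, depth-first,
// chaining each child's tokens after its parent's -- the same shape as
// `iter_tokens(arena).chain(...)` in the real implementation.
fn iter_tokens(e: &Expr, out: &mut Vec<(Token, usize, usize)>) {
    match e {
        Expr::Var { start, len } => out.push((Token::Variable, *start, *len)),
        Expr::Num { start, len } => out.push((Token::Number, *start, *len)),
        Expr::Apply(f, args) => {
            iter_tokens(f, out);
            for a in args {
                iter_tokens(a, out);
            }
        }
    }
}

fn main() {
    // Tokens for `f 1 2`: one Variable followed by two Numbers.
    let e = Expr::Apply(
        Box::new(Expr::Var { start: 0, len: 1 }),
        vec![
            Expr::Num { start: 2, len: 1 },
            Expr::Num { start: 4, len: 1 },
        ],
    );
    let mut tokens = Vec::new();
    iter_tokens(&e, &mut tokens);
    assert_eq!(tokens[0], (Token::Variable, 0, 1));
    assert_eq!(tokens.len(), 3);
}
```

The real code threads a `Bump` arena through every call so the token vectors are allocated without individual heap frees; the recursion structure is otherwise the same.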


@@ -0,0 +1,24 @@
use roc_module::symbol::{Interns, ModuleId};
use roc_types::subs::{Subs, Variable};
pub(super) fn format_var_type(
var: Variable,
subs: &mut Subs,
module_id: &ModuleId,
interns: &Interns,
) -> String {
let snapshot = subs.snapshot();
let type_str = roc_types::pretty_print::name_and_print_var(
var,
subs,
*module_id,
interns,
roc_types::pretty_print::DebugPrint::NOTHING,
);
subs.rollback_to(snapshot);
type_str
}
pub(super) fn is_roc_identifier_char(char: &char) -> bool {
matches!(char, 'a'..='z' | 'A'..='Z' | '0'..='9' | '.')
}


@@ -0,0 +1,245 @@
use roc_region::all::{LineColumn, LineColumnRegion, LineInfo, Region};
use tower_lsp::lsp_types::{Position, Range};
pub(crate) trait ToRange {
type Feed;
fn to_range(&self, feed: &Self::Feed) -> Range;
}
impl ToRange for Region {
type Feed = LineInfo;
fn to_range(&self, line_info: &LineInfo) -> Range {
let LineColumnRegion { start, end } = line_info.convert_region(*self);
Range {
start: Position {
line: start.line,
character: start.column,
},
end: Position {
line: end.line,
character: end.column,
},
}
}
}
pub(crate) trait ToRegion {
type Feed;
fn to_region(&self, feed: &Self::Feed) -> Region;
}
impl ToRegion for Range {
type Feed = LineInfo;
fn to_region(&self, line_info: &LineInfo) -> Region {
let lc_region = LineColumnRegion {
start: LineColumn {
line: self.start.line,
column: self.start.character,
},
end: LineColumn {
line: self.end.line,
column: self.end.character,
},
};
line_info.convert_line_column_region(lc_region)
}
}
pub(crate) trait ToRocPosition {
type Feed;
fn to_roc_position(&self, feed: &Self::Feed) -> roc_region::all::Position;
}
impl ToRocPosition for tower_lsp::lsp_types::Position {
type Feed = LineInfo;
fn to_roc_position(&self, line_info: &LineInfo) -> roc_region::all::Position {
let lc = LineColumn {
line: self.line,
column: self.character,
};
line_info.convert_line_column(lc)
}
}
pub(crate) mod diag {
use std::path::Path;
use roc_load::LoadingProblem;
use roc_region::all::{LineInfo, Region};
use roc_solve_problem::TypeError;
use roc_problem::Severity;
use roc_reporting::report::RocDocAllocator;
use tower_lsp::lsp_types::{Diagnostic, DiagnosticSeverity, Position, Range};
use super::ToRange;
pub trait IntoLspSeverity {
fn into_lsp_severity(self) -> DiagnosticSeverity;
}
impl IntoLspSeverity for Severity {
fn into_lsp_severity(self) -> DiagnosticSeverity {
match self {
Severity::RuntimeError => DiagnosticSeverity::ERROR,
Severity::Warning => DiagnosticSeverity::WARNING,
Severity::Fatal => DiagnosticSeverity::ERROR,
}
}
}
pub trait IntoLspDiagnostic<'a> {
type Feed;
fn into_lsp_diagnostic(self, feed: &'a Self::Feed) -> Option<Diagnostic>;
}
impl IntoLspDiagnostic<'_> for &LoadingProblem<'_> {
type Feed = ();
fn into_lsp_diagnostic(self, _feed: &()) -> Option<Diagnostic> {
let range = Range {
start: Position {
line: 0,
character: 0,
},
end: Position {
line: 0,
character: 1,
},
};
let msg = match self {
LoadingProblem::FileProblem { filename, error } => {
format!(
"Failed to load {} due to an I/O error: {}",
filename.display(),
error
)
}
LoadingProblem::ParsingFailed(fe) => {
let problem = &fe.problem.problem;
format!("Failed to parse Roc source file: {problem:?}")
}
LoadingProblem::UnexpectedHeader(header) => {
format!("Unexpected header: {}", header)
}
LoadingProblem::ChannelProblem(_) => {
"Internal error: message channel died".to_string()
}
LoadingProblem::ErrJoiningWorkerThreads => {
"Internal error: analysis worker threads died".to_string()
}
LoadingProblem::TriedToImportAppModule => {
"Attempted to import app module".to_string()
}
LoadingProblem::FormattedReport(report) => report.clone(),
LoadingProblem::ImportCycle(_, _) => {
"Circular dependency between modules".to_string()
}
LoadingProblem::IncorrectModuleName(_) => "Incorrect module name".to_string(),
LoadingProblem::CouldNotFindCacheDir => {
format!(
"Could not find Roc cache directory {}",
roc_packaging::cache::roc_cache_dir().display()
)
}
};
Some(Diagnostic {
range,
severity: Some(DiagnosticSeverity::ERROR),
code: None,
code_description: None,
source: Some("load".to_owned()),
message: msg,
related_information: None,
tags: None,
data: None,
})
}
}
pub struct ProblemFmt<'a> {
pub alloc: &'a RocDocAllocator<'a>,
pub line_info: &'a LineInfo,
pub path: &'a Path,
}
impl<'a> IntoLspDiagnostic<'a> for roc_problem::can::Problem {
type Feed = ProblemFmt<'a>;
fn into_lsp_diagnostic(self, fmt: &'a ProblemFmt<'a>) -> Option<Diagnostic> {
let range = self
.region()
.unwrap_or_else(Region::zero)
.to_range(fmt.line_info);
let report = roc_reporting::report::can_problem(
fmt.alloc,
fmt.line_info,
fmt.path.to_path_buf(),
self,
);
let severity = report.severity.into_lsp_severity();
let mut msg = String::new();
report.render_ci(&mut msg, fmt.alloc);
Some(Diagnostic {
range,
severity: Some(severity),
code: None,
code_description: None,
source: None,
message: msg,
related_information: None,
tags: None,
data: None,
})
}
}
impl<'a> IntoLspDiagnostic<'a> for TypeError {
type Feed = ProblemFmt<'a>;
fn into_lsp_diagnostic(self, fmt: &'a ProblemFmt<'a>) -> Option<Diagnostic> {
let range = self
.region()
.unwrap_or_else(Region::zero)
.to_range(fmt.line_info);
let report = roc_reporting::report::type_problem(
fmt.alloc,
fmt.line_info,
fmt.path.to_path_buf(),
self,
)?;
let severity = report.severity.into_lsp_severity();
let mut msg = String::new();
report.render_ci(&mut msg, fmt.alloc);
Some(Diagnostic {
range,
severity: Some(severity),
code: None,
code_description: None,
source: None,
message: msg,
related_information: None,
tags: None,
data: None,
})
}
}
}
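The conversion traits above translate between LSP's 0-based (line, character) positions and the compiler's flat regions, with `LineInfo` doing the actual arithmetic. A self-contained sketch of the underlying idea (invented helper names, plain offsets instead of the `roc_region` types): precompute where each line starts, then a position is just `line_start + column`, and the reverse lookup is a search over the line starts.

```rust
/// Byte offset at which each line starts, e.g. "ab\ncd" -> [0, 3].
fn line_starts(src: &str) -> Vec<usize> {
    let mut starts = vec![0];
    for (i, b) in src.bytes().enumerate() {
        if b == b'\n' {
            starts.push(i + 1);
        }
    }
    starts
}

/// Convert a 0-based (line, column) pair to a flat byte offset.
fn to_offset(starts: &[usize], line: usize, column: usize) -> usize {
    starts[line] + column
}

/// Convert a flat byte offset back to a 0-based (line, column) pair.
fn to_line_column(starts: &[usize], offset: usize) -> (usize, usize) {
    // partition_point finds the first line whose start is past `offset`;
    // the line containing the offset is the one before it.
    let line = starts.partition_point(|&s| s <= offset) - 1;
    (line, offset - starts[line])
}

fn main() {
    let src = "main =\n    foo bar\n";
    let starts = line_starts(src);
    assert_eq!(to_offset(&starts, 1, 4), 11);
    assert_eq!(to_line_column(&starts, 11), (1, 4));
}
```

One caveat this sketch glosses over: LSP `character` counts UTF-16 code units by default, while byte offsets count UTF-8, so a production conversion has to account for the negotiated position encoding on non-ASCII lines.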


@@ -0,0 +1,219 @@
use log::{debug, info, trace};
use std::{
collections::HashMap,
sync::{Arc, OnceLock},
time::Duration,
};
use tokio::sync::{Mutex, MutexGuard};
use tower_lsp::lsp_types::{
CompletionResponse, Diagnostic, GotoDefinitionResponse, Hover, Position, SemanticTokensResult,
TextEdit, Url,
};
use crate::analysis::{AnalyzedDocument, DocInfo};
#[derive(Debug)]
pub(crate) struct DocumentPair {
info: DocInfo,
latest_document: OnceLock<Arc<AnalyzedDocument>>,
last_good_document: Arc<AnalyzedDocument>,
}
impl DocumentPair {
pub(crate) fn new(
latest_doc: Arc<AnalyzedDocument>,
last_good_document: Arc<AnalyzedDocument>,
) -> Self {
Self {
info: latest_doc.doc_info.clone(),
latest_document: OnceLock::from(latest_doc),
last_good_document,
}
}
}
#[derive(Debug)]
pub(crate) struct RegistryConfig {
pub(crate) latest_document_timeout: Duration,
}
impl Default for RegistryConfig {
fn default() -> Self {
Self {
latest_document_timeout: Duration::from_millis(5000),
}
}
}
#[derive(Debug, Default)]
pub(crate) struct Registry {
documents: Mutex<HashMap<Url, DocumentPair>>,
config: RegistryConfig,
}
impl Registry {
pub(crate) fn new(config: RegistryConfig) -> Self {
Self {
documents: Default::default(),
config,
}
}
pub async fn get_latest_version(&self, url: &Url) -> Option<i32> {
self.documents.lock().await.get(url).map(|x| x.info.version)
}
fn update_document(
documents: &mut MutexGuard<'_, HashMap<Url, DocumentPair>>,
document: Arc<AnalyzedDocument>,
updating_url: &Url,
) {
if &document.doc_info.url == updating_url {
// Write the newly analysed document into the OnceLock that any request requiring the latest document will be waiting on.
if let Some(a) = documents.get_mut(updating_url) {
a.latest_document.set(document.clone()).unwrap()
}
}
let url = document.url().clone();
match documents.get_mut(&url) {
Some(old_doc) => {
// If the latest doc_info has a higher version than the incoming document, we shouldn't overwrite the document, but we can still update last_good_document if the new document type-checked.
if old_doc.info.version > document.doc_info.version {
if document.type_checked() {
*old_doc = DocumentPair {
info: old_doc.info.clone(),
latest_document: old_doc.latest_document.clone(),
last_good_document: document,
};
}
} else if document.type_checked() {
*old_doc = DocumentPair::new(document.clone(), document);
} else {
debug!(
"Document typechecking failed at version {:?}, not updating last_good_document",
&document.doc_info.version
);
*old_doc = DocumentPair::new(document, old_doc.last_good_document.clone());
}
}
None => {
documents.insert(url.clone(), DocumentPair::new(document.clone(), document));
}
}
}
pub async fn apply_changes<'a>(&self, analysed_docs: Vec<AnalyzedDocument>, updating_url: Url) {
let mut documents = self.documents.lock().await;
debug!(
"Finished doc analysis for doc: {}",
updating_url.to_string()
);
for document in analysed_docs {
let document = Arc::new(document);
Registry::update_document(&mut documents, document, &updating_url);
}
}
pub async fn apply_doc_info_changes(&self, url: Url, info: DocInfo) {
let mut documents_lock = self.documents.lock().await;
let doc = documents_lock.get_mut(&url);
match doc {
Some(a) => {
debug!(
"Set the docInfo for {:?} to version:{:?}",
url.as_str(),
info.version
);
*a = DocumentPair {
info,
last_good_document: a.last_good_document.clone(),
latest_document: OnceLock::new(),
};
}
None => debug!("No existing docinfo for {:?}", url.as_str()),
}
}
async fn document_info_by_url(&self, url: &Url) -> Option<DocInfo> {
self.documents.lock().await.get(url).map(|a| a.info.clone())
}
/// Tries to get the latest document from analysis.
/// Gives up and returns `None` after the configured timeout (5 seconds by default).
async fn latest_document_by_url(&self, url: &Url) -> Option<Arc<AnalyzedDocument>> {
tokio::time::timeout(self.config.latest_document_timeout, async {
// TODO: This should really be a condvar triggered when the latest document is ready; polling will do for now though.
loop {
let docs = self.documents.lock().await;
if let Some(a) = docs.get(url) {
if let Some(a) = a.latest_document.get() {
return a.clone();
}
}
drop(docs);
tokio::task::yield_now().await;
}
})
.await
.ok()
}
pub async fn diagnostics(&self, url: &Url) -> Vec<Diagnostic> {
let Some(document) = self.latest_document_by_url(url).await else {
return vec![];
};
document.diagnostics()
}
pub async fn hover(&self, url: &Url, position: Position) -> Option<Hover> {
self.latest_document_by_url(url).await?.hover(position)
}
pub async fn goto_definition(
&self,
url: &Url,
position: Position,
) -> Option<GotoDefinitionResponse> {
let document = self.latest_document_by_url(url).await?;
let symbol = document.symbol_at(position)?;
let def_document_url = document.module_url(symbol.module_id())?;
let def_document = self.latest_document_by_url(&def_document_url).await?;
def_document.definition(symbol)
}
pub async fn formatting(&self, url: &Url) -> Option<Vec<TextEdit>> {
let document = self.document_info_by_url(url).await?;
document.format()
}
pub async fn semantic_tokens(&self, url: &Url) -> Option<SemanticTokensResult> {
let document = self.document_info_by_url(url).await?;
document.semantic_tokens()
}
pub async fn completion_items(
&self,
url: &Url,
position: Position,
) -> Option<CompletionResponse> {
trace!("Starting completion ");
let lock = self.documents.lock().await;
let pair = lock.get(url)?;
let latest_doc_info = &pair.info;
info!(
"Using document version:{:?} for completion ",
latest_doc_info.version
);
let completions = pair
.last_good_document
.completion_items(position, latest_doc_info)?;
Some(CompletionResponse::Array(completions))
}
}
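`Registry::update_document` compares document versions so that a slow analysis of an old edit never clobbers the state of a newer one. A minimal sketch of that stale-result check, with invented names and a plain `HashMap` in place of the async registry (not the actual roc APIs):

```rust
use std::collections::HashMap;

// Tracks the latest known version per document URL, mirroring the
// version comparison done in `update_document` above.
struct VersionRegistry {
    latest: HashMap<String, i32>,
}

impl VersionRegistry {
    /// Record that an edit at `version` is now the latest for `url`.
    fn record(&mut self, url: &str, version: i32) {
        self.latest.insert(url.to_string(), version);
    }

    /// Returns true when an analysis produced at `version` may still be
    /// applied, i.e. no newer edit has superseded it in the meantime.
    fn may_apply(&self, url: &str, version: i32) -> bool {
        match self.latest.get(url) {
            Some(&latest) => latest == version,
            // No recorded version: nothing newer exists, so proceed.
            None => true,
        }
    }
}

fn main() {
    let mut reg = VersionRegistry { latest: HashMap::new() };
    reg.record("file:/Test.roc", 1);
    assert!(reg.may_apply("file:/Test.roc", 1));

    // A newer edit arrives while version 1 is still being analysed;
    // the in-flight result for version 1 must be discarded.
    reg.record("file:/Test.roc", 2);
    assert!(!reg.may_apply("file:/Test.roc", 1));
    assert!(reg.may_apply("file:/Test.roc", 2));
}
```

The real registry does this under a tokio `Mutex` and additionally keeps a `last_good_document` fallback so features like completion keep working while the latest edit fails to type-check.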


@@ -0,0 +1,597 @@
use analysis::HIGHLIGHT_TOKENS_LEGEND;
use log::{debug, trace};
use registry::{Registry, RegistryConfig};
use std::future::Future;
use std::panic::AssertUnwindSafe;
use std::time::Duration;
use tower_lsp::jsonrpc::{self, Result};
use tower_lsp::lsp_types::*;
use tower_lsp::{Client, LanguageServer, LspService, Server};
use crate::analysis::{global_analysis, DocInfo};
mod analysis;
mod convert;
mod registry;
struct RocServer {
pub state: RocServerState,
client: Client,
}
struct RocServerConfig {
pub debounce_ms: Duration,
}
impl Default for RocServerConfig {
fn default() -> Self {
Self {
debounce_ms: Duration::from_millis(100),
}
}
}
/// This exists so we can test most of RocLs without anything LSP-related.
struct RocServerState {
registry: Registry,
config: RocServerConfig,
}
impl std::panic::RefUnwindSafe for RocServer {}
fn read_env_num(name: &str) -> Option<u64> {
std::env::var(name)
.ok()
.and_then(|a| str::parse::<u64>(&a).ok())
}
impl RocServer {
pub fn new(client: Client) -> Self {
let registry_config = RegistryConfig {
latest_document_timeout: Duration::from_millis(
read_env_num("ROCLS_LATEST_DOC_TIMEOUT_MS").unwrap_or(5000),
),
};
let config = RocServerConfig {
debounce_ms: Duration::from_millis(read_env_num("ROCLS_DEBOUNCE_MS").unwrap_or(100)),
};
Self {
state: RocServerState::new(config, Registry::new(registry_config)),
client,
}
}
pub fn capabilities() -> ServerCapabilities {
let text_document_sync = TextDocumentSyncCapability::Options(
// TODO: later on make this incremental
TextDocumentSyncOptions {
open_close: Some(true),
change: Some(TextDocumentSyncKind::FULL),
..TextDocumentSyncOptions::default()
},
);
let hover_provider = HoverProviderCapability::Simple(true);
let definition_provider = DefinitionOptions {
work_done_progress_options: WorkDoneProgressOptions {
work_done_progress: None,
},
};
let document_formatting_provider = DocumentFormattingOptions {
work_done_progress_options: WorkDoneProgressOptions {
work_done_progress: None,
},
};
let semantic_tokens_provider =
SemanticTokensServerCapabilities::SemanticTokensOptions(SemanticTokensOptions {
work_done_progress_options: WorkDoneProgressOptions {
work_done_progress: None,
},
legend: SemanticTokensLegend {
token_types: HIGHLIGHT_TOKENS_LEGEND.into(),
token_modifiers: vec![],
},
range: None,
full: Some(SemanticTokensFullOptions::Bool(true)),
});
let completion_provider = CompletionOptions {
resolve_provider: Some(false),
trigger_characters: Some(vec![".".to_string()]),
all_commit_characters: None,
work_done_progress_options: WorkDoneProgressOptions {
work_done_progress: None,
},
};
ServerCapabilities {
text_document_sync: Some(text_document_sync),
hover_provider: Some(hover_provider),
definition_provider: Some(OneOf::Right(definition_provider)),
document_formatting_provider: Some(OneOf::Right(document_formatting_provider)),
semantic_tokens_provider: Some(semantic_tokens_provider),
completion_provider: Some(completion_provider),
..ServerCapabilities::default()
}
}
/// Records a document content change.
async fn change(&self, fi: Url, text: String, version: i32) {
let updating_result = self.state.change(&fi, text, version).await;
// The analysis task can be cancelled by another change coming in, which will update the watched variable.
if let Err(e) = updating_result {
debug!("Cancelled change. Reason: {:?}", e);
return;
}
debug!("Applied changes; getting and returning diagnostics");
let diagnostics = self.state.registry.diagnostics(&fi).await;
self.client
.publish_diagnostics(fi, diagnostics, Some(version))
.await;
}
}
impl RocServerState {
pub fn new(config: RocServerConfig, registry: Registry) -> RocServerState {
Self { config, registry }
}
async fn close(&self, _fi: Url) {}
pub async fn change(
&self,
fi: &Url,
text: String,
version: i32,
) -> std::result::Result<(), String> {
debug!("V{:?}:starting change", version);
let doc_info = DocInfo::new(fi.clone(), text, version);
self.registry
.apply_doc_info_changes(fi.clone(), doc_info.clone())
.await;
debug!(
"V{:?}:finished updating docinfo, starting analysis ",
version
);
let inner_ref = self;
let updating_result = async {
// This reduces wasted computation by waiting long enough for a new change to come in and update the version before we check, but it does delay the final analysis. Ideally this would be replaced by cancelling the in-flight analysis when a new change arrives.
tokio::time::sleep(self.config.debounce_ms).await;
let is_latest = inner_ref
.registry
.get_latest_version(fi)
.await
.map(|latest| latest == version)
.unwrap_or(true);
if !is_latest {
return Err("Not latest version skipping analysis".to_string());
}
let results = match tokio::task::spawn_blocking(|| global_analysis(doc_info)).await {
Err(e) => return Err(format!("Document analysis failed. Reason: {:?}", e)),
Ok(a) => a,
};
let latest_version = inner_ref.registry.get_latest_version(fi).await;
// If this version is not the latest, another change must have come in and this analysis is stale.
// If there is no recorded latest version, we can just proceed with the update.
if let Some(latest_version) = latest_version {
if latest_version != version {
return Err(format!(
"Version {0} doesn't match latest: {1} discarding analysis",
version, latest_version
));
}
}
debug!(
"V{:?}:finished document analysis applying changes ",
version
);
inner_ref.registry.apply_changes(results, fi.clone()).await;
Ok(())
}
.await;
debug!("V{:?}:finished document change process", version);
updating_result
}
}
#[tower_lsp::async_trait]
impl LanguageServer for RocServer {
async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
Ok(InitializeResult {
capabilities: Self::capabilities(),
..InitializeResult::default()
})
}
async fn initialized(&self, _: InitializedParams) {
self.client
.log_message(MessageType::INFO, "Roc language server initialized.")
.await;
}
async fn did_open(&self, params: DidOpenTextDocumentParams) {
let TextDocumentItem {
uri, text, version, ..
} = params.text_document;
self.change(uri, text, version).await;
}
async fn did_change(&self, params: DidChangeTextDocumentParams) {
let VersionedTextDocumentIdentifier { uri, version, .. } = params.text_document;
// NOTE: We specify that we expect full-content syncs in the server capabilities,
// so here we assume the only change passed is a change of the entire document's content.
let TextDocumentContentChangeEvent { text, .. } =
params.content_changes.into_iter().next().unwrap();
self.change(uri, text, version).await;
}
async fn did_close(&self, params: DidCloseTextDocumentParams) {
let TextDocumentIdentifier { uri } = params.text_document;
self.state.close(uri).await;
}
async fn shutdown(&self) -> Result<()> {
Ok(())
}
async fn hover(&self, params: HoverParams) -> Result<Option<Hover>> {
let HoverParams {
text_document_position_params:
TextDocumentPositionParams {
text_document,
position,
},
work_done_progress_params: _,
} = params;
unwind_async(self.state.registry.hover(&text_document.uri, position)).await
}
async fn goto_definition(
&self,
params: GotoDefinitionParams,
) -> Result<Option<GotoDefinitionResponse>> {
let GotoDefinitionParams {
text_document_position_params:
TextDocumentPositionParams {
text_document,
position,
},
work_done_progress_params: _,
partial_result_params: _,
} = params;
unwind_async(
self.state
.registry
.goto_definition(&text_document.uri, position),
)
.await
}
async fn formatting(&self, params: DocumentFormattingParams) -> Result<Option<Vec<TextEdit>>> {
let DocumentFormattingParams {
text_document,
options: _,
work_done_progress_params: _,
} = params;
unwind_async(self.state.registry.formatting(&text_document.uri)).await
}
async fn semantic_tokens_full(
&self,
params: SemanticTokensParams,
) -> Result<Option<SemanticTokensResult>> {
let SemanticTokensParams {
text_document,
work_done_progress_params: _,
partial_result_params: _,
} = params;
unwind_async(self.state.registry.semantic_tokens(&text_document.uri)).await
}
async fn completion(&self, params: CompletionParams) -> Result<Option<CompletionResponse>> {
let doc = params.text_document_position;
trace!("Got completion request.");
unwind_async(
self.state
.registry
.completion_items(&doc.text_document.uri, doc.position),
)
.await
}
}
async fn unwind_async<Fut, T>(future: Fut) -> tower_lsp::jsonrpc::Result<T>
where
Fut: Future<Output = T>,
{
let result = { futures::FutureExt::catch_unwind(AssertUnwindSafe(future)).await };
match result {
Ok(a) => tower_lsp::jsonrpc::Result::Ok(a),
Err(err) => tower_lsp::jsonrpc::Result::Err(jsonrpc::Error {
code: jsonrpc::ErrorCode::InternalError,
message: format!("{:?}", err),
data: None,
}),
}
}
#[tokio::main]
async fn main() {
env_logger::Builder::from_env("ROCLS_LOG").init();
let stdin = tokio::io::stdin();
let stdout = tokio::io::stdout();
let (service, socket) = LspService::new(RocServer::new);
Server::new(stdin, stdout, socket).serve(service).await;
}
#[cfg(test)]
mod tests {
use std::sync::Once;
use expect_test::expect;
use indoc::indoc;
use log::info;
use super::*;
fn completion_resp_to_strings(
resp: CompletionResponse,
) -> Vec<(String, Option<Documentation>)> {
match resp {
CompletionResponse::Array(list) => list.into_iter(),
CompletionResponse::List(list) => list.items.into_iter(),
}
.map(|item| (item.label, item.documentation))
.collect::<Vec<_>>()
}
/// gets completion and returns only the label and docs for each completion
async fn get_basic_completion_info(
reg: &Registry,
url: &Url,
position: Position,
) -> Option<Vec<(String, Option<Documentation>)>> {
reg.completion_items(url, position)
.await
.map(completion_resp_to_strings)
}
/// gets completion and returns only the label for each completion
fn comp_labels(
completions: Option<Vec<(String, Option<Documentation>)>>,
) -> Option<Vec<String>> {
completions.map(|list| list.into_iter().map(|(labels, _)| labels).collect())
}
const DOC_LIT: &str = indoc! {r#"
interface Test
exposes []
imports []
"#};
static INIT: Once = Once::new();
async fn test_setup(doc: String) -> (RocServerState, Url) {
INIT.call_once(|| {
env_logger::builder()
.is_test(true)
.filter_level(log::LevelFilter::Debug)
.init();
});
info!("Doc is:\n{0}", doc);
let url = Url::parse("file:/Test.roc").unwrap();
let inner = RocServerState::new(RocServerConfig::default(), Registry::default());
// setup the file
inner.change(&url, doc, 0).await.unwrap();
(inner, url)
}
/// Runs a basic completion and returns the response
async fn completion_test(
initial: &str,
addition: &str,
position: Position,
) -> Option<Vec<(String, Option<Documentation>)>> {
let doc = DOC_LIT.to_string() + initial;
let (inner, url) = test_setup(doc.clone()).await;
let registry = &inner.registry;
let change = doc.clone() + addition;
info!("doc is:\n{0}", change);
inner.change(&url, change, 1).await.unwrap();
get_basic_completion_info(registry, &url, position).await
}
async fn completion_test_labels(
initial: &str,
addition: &str,
position: Position,
) -> Option<Vec<String>> {
comp_labels(completion_test(initial, addition, position).await)
}
/// Test that completion works properly when we apply an "as" pattern to an identifier
#[tokio::test]
async fn test_completion_as_identifier() {
let suffix = DOC_LIT.to_string()
+ indoc! {r#"
main =
when a is
inn as outer ->
"#};
let (inner, url) = test_setup(suffix.clone()).await;
let position = Position::new(6, 7);
let registry = &inner.registry;
let change = suffix.clone() + "o";
inner.change(&url, change, 1).await.unwrap();
let comp1 = comp_labels(get_basic_completion_info(registry, &url, position).await);
let c = suffix.clone() + "i";
inner.change(&url, c, 2).await.unwrap();
let comp2 = comp_labels(get_basic_completion_info(registry, &url, position).await);
let actual = [comp1, comp2];
expect![[r#"
[
Some(
[
"outer",
],
),
Some(
[
"inn",
"outer",
],
),
]
"#]]
.assert_debug_eq(&actual)
}
/// Tests that completion works properly when we apply an "as" pattern to a record.
#[tokio::test]
async fn test_completion_as_record() {
let doc = DOC_LIT.to_string()
+ indoc! {r#"
main =
when a is
{one,two} as outer ->
"#};
let (inner, url) = test_setup(doc.clone()).await;
let position = Position::new(6, 7);
let reg = &inner.registry;
let change = doc.clone() + "o";
inner.change(&url, change, 1).await.unwrap();
let comp1 = comp_labels(get_basic_completion_info(reg, &url, position).await);
let c = doc.clone() + "t";
inner.change(&url, c, 2).await.unwrap();
let comp2 = comp_labels(get_basic_completion_info(reg, &url, position).await);
let actual = [comp1, comp2];
expect![[r#"
[
Some(
[
"one",
"two",
"outer",
],
),
Some(
[
"one",
"two",
"outer",
],
),
]
"#]]
.assert_debug_eq(&actual);
}
/// Test that completion works properly for function parameters introduced by a lambda
#[tokio::test]
async fn test_completion_fun_params() {
let actual = completion_test_labels(
indoc! {r"
main = \param1, param2 ->
"},
"par",
Position::new(4, 3),
)
.await;
expect![[r#"
Some(
[
"param1",
"param2",
],
)
"#]]
.assert_debug_eq(&actual);
}
#[tokio::test]
async fn test_completion_closure() {
let actual = completion_test_labels(
indoc! {r"
main = [] |> List.map \ param1 , param2->
"},
"par",
Position::new(4, 3),
)
.await;
expect![[r#"
Some(
[
"param1",
"param2",
],
)
"#]]
.assert_debug_eq(&actual);
}
#[tokio::test]
async fn test_completion_with_docs() {
let actual = completion_test(
indoc! {r"
## This is the main function
main = mai
"},
"par",
Position::new(4, 10),
)
.await;
expect![[r#"
Some(
[
(
"main",
Some(
MarkupContent(
MarkupContent {
kind: Markdown,
value: "This is the main function",
},
),
),
),
],
)
"#]]
.assert_debug_eq(&actual);
}
}